

I am kind of syntax agnostic and would be happy to use more complicated syntax in exchange for more power. (I have a lot of HTML inside my Markdown files, too.) However, my use of rST has been in Sphinx, and while I want to love it because it's quite powerful, it's so slow. Am I missing some configuration or third-party package to fix this? I wrote about 15k words of English text in rST in Sphinx to document a project [1], and Sphinx's build speed was a far bigger impediment than my unfamiliarity with rST.

[1]: https://pages.micahrl.com/progfiguration/


The notes on browser privacy imo are too significant to have been relegated to a footnote:

As part of the drafting of the 2015 finding on Unsanctioned Web Tracking, the then-TAG (myself included) spent a great deal of time working through the details of potential fingerprinting vectors. What we came to realise was that only the Tor Browser had done the work to credibly analyse fingerprinting vectors and produce a coherent threat model. To the best of my knowledge, that remains true today.

Other vendors continue to publish gussied-up marketing documents and stroppy blog posts that purport to cover the same ground, but consistently fail to do so. It's truly objectionable that those same vendors also prevent users from choosing disciplined privacy-focused browsers.

To understand the difference, we can do a small thought experiment, enumerating what would be necessary to sand off currently-identifiable attributes of individual users. Because only 31 or 32 bits are needed to uniquely identify anybody (often less), we want a high safety factor. This means bundling users into very large crowds by removing distinct observable properties. To sand off variations between users, a truly private browser might:

- Run the entire browser in a VM in order to:

  - Cap the number of CPU cores, cap their frequency, and centralise on a single instruction set (e.g., emulating ARM when running on x86). Will likely result in a 2-5x slowdown.

  - Ensure (high) fixed latency for all disk access.

  - Set a uniform (low) cap on total memory.

- Disable hardware acceleration for all graphics and media.

- Disable JIT. Will slow JavaScript by 3-10x.

- Only allow a fixed set of fonts, screen sizes, pixel densities, gamuts, and refresh rates; no more resizing browsers with a mouse. The web will look pixelated and drab, and animations will feel choppy.

- Remove most accessibility settings.

- Remove the ability to install extensions.

- Eliminate direct typing and touch-based interactions, as those can leak timing information that's unique.

- Run all traffic through Tor or similarly high-latency VPN egress nodes.

- Disable all reidentifying APIs (no more web-based video conferencing!)

Only the Tor project is shipping a browser anything like this today, and it's how you can tell that most of what passes for "privacy" features in other browsers amounts to anti-annoyance and anti-creep-factor interventions; they matter, but won't end the digital panopticon.


Oh man, you're right, I didn't realize they worked this way. This basically means there is no compromise at all; I'm going to update the post. Thanks!


Well, to be fair, there's still a compromise. It's definitely more work to manage these sprites, and it's especially annoying when there's more than one state. I think it's possible to write some tool to automate it, but I haven't found one.


There are preprocessors that will do this. Conceptually, we would:

  include sprite-1.snippet
  include sprite-2.snippet
and it would write the defs into the page. Then later in the page, `<use>` the defs you included.
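To make that concrete, here's a rough sketch of what the expanded page might look like (the ids and shapes are invented for illustration, not any particular preprocessor's output):

  <!-- the includes expand into hidden defs near the top of the page -->
  <svg width="0" height="0" style="position:absolute">
    <defs>
      <symbol id="sprite-1" viewBox="0 0 24 24">
        <circle cx="12" cy="12" r="10"/>
      </symbol>
    </defs>
  </svg>

  <!-- later in the page, each reference is a one-liner -->
  <svg width="24" height="24"><use href="#sprite-1"/></svg>

Because the symbol doesn't hard-code a fill, you can still style each `<use>` instance separately with CSS, which helps with the multiple-states problem.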


Ohh, interesting, I have never heard of SMIL. For this post I was thinking mostly of static styling (... and got a little carried away with interactive stuff in the diagram...) but I'll have to look into SMIL in the future.


This is a great point. I'm going to test some of the `<use>` suggestions I got in this thread, but if those don't pan out I'll definitely do this.


Huh. I'm the OP, and I do have a dark mode that respects `prefers-color-scheme: dark` -- or at least, it works for me (tm). Would you mind sharing details about your dark mode theme? Is it a third party extension or maybe a browser I haven't tested?
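For reference, the dark mode hinges on a media query along these lines (a simplified sketch, not my actual stylesheet):

  <style>
    @media (prefers-color-scheme: dark) {
      :root {
        color-scheme: dark;
        background: #111;
        color: #eee;
      }
    }
  </style>

If the system-level theme never reports dark, that query never matches, which is why I'm wondering about an extension or browser quirk.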


I'm on Windows and in the system-level settings app, there's a toggle for dark mode. When I turn that on, then `prefers-color-scheme: dark` starts matching. There are zero third-party extensions or styles here, and my browser is Chromium 118.


I think the main cause of the black rectangle is the lack of support for nested CSS. At least that's what I'm seeing in my browser.
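Roughly this failure mode (a contrived sketch, not the post's actual styles): engines without nesting support keep the outer declarations but discard the nested rule entirely, so the dark background applies and the fill never does:

  <style>
    .diagram {
      background: #000;
      /* pre-nesting engines drop this whole inner rule */
      & .face {
        fill: #ff0;
      }
    }
  </style>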


Ohhh interesting. To anyone hitting this, I'm curious what specific browser you're using - I thought it was available ~everywhere now? https://caniuse.com/css-nesting


Seeing only an inverted smilie over a black background on my iPad, in both dark mode and light.


I can't replicate on mine. If this is due to nested styles, I think you are behind on your software updates :). But also, maybe I need to hold off a bit longer before moving to nested styles.


According to a forum post [1], the iPad Mini 4, for example, was discontinued in March 2019 and is stuck on iOS 15, which doesn't support nested styles. Perhaps the issue is that people want to continue using old tablet devices that are no longer getting OS updates?

[1] https://education.apple.com/resource/250012027


It's only been available everywhere since like 2023. My browser happens to be from 2022.


I think this is more like a webserver sniffing the user agent and choosing not to serve the request than like sending a webserver bad data such that it isn't able to serve the request. I'm concerned that passkeys will end up in a "This site is best viewed in Internet Explorer" situation, where passkey providers that would work fine are detected and prohibited because website operators want to enforce user behavior.


In the sense of "I refuse to support browsers that only support TLS 1.0", definitely. "Just let the user turn off TLS, why do you hate choice" isn't the instant win you might hope it is.


No, again, the protocol between the site and the authenticator is unchanged. It's much more like DRM that doesn't let 4K media play on systems that allow the user to do whatever they want, but in this case instead of the DRM preventing the user from copying someone else's copyrighted work, it's preventing the user from copying their own data.


I agree that it's not an unqualified win. If sites block passkey apps that allow exporting unencrypted passkeys, that probably will prevent some accidental passkey leaks.

It's just that it's not an unqualified win to allow sites to block passkey apps either. If we allow that, we can end up in a place where sites block apps for the wrong reasons, or where developing passkey apps becomes more expensive, so there's less competition among secure passkey apps.

It's not just whether it's a good idea to allow unencrypted exports. It's whether it's a good idea to give websites a say in how we manage credentials.


Oh man, this is really cool. I have also written a Python infrastructure-as-code project (https://pages.micahrl.com/progfiguration/), and I really like the idea of using a programming language rather than a text document to define infrastructure. Yours looks very polished, and the built-in support for testing in Docker is a brilliant idea.


I am really curious whether the DDoS tried to follow them to the new infra and failed to cause an outage. Apparently the perpetrator noticed when they got Cogent to narrow the null route, but the blog post notes they still can't access the original subnet in that datacenter. Are they still trying to knock Sourcehut offline? Is the DDoS still pointing at the now-deprecated infra for some reason?


When they switched DNS over to point to the AMS datacenter, the DDoS attack followed it until it got smacked down by the OVH NAT.


> At about 06:30 UTC the following morning, the DDoS escalated and broadened its targets to include other parts of our PHL subnet. In response, our colocation provider null routed our subnet once again. This subnet has been unreachable ever since.


Right, that's the attack expanding to the rest of the subnet in their old DC. They've since migrated to the new DC with new countermeasures. Did the DDoS follow, and are the countermeasures working? Or if it didn't follow, why not?

There's also the question of whether the DDoS is even still hitting the old infrastructure. The post says it's unreachable, but that would also be true if the null route simply hadn't been removed yet.


Yes, the DDoS followed us to networks with countermeasures, and yes, the countermeasures worked. We don't want to disclose too much about that, though.

