You can buy a domain and put public NS servers on it for the sole purpose of doing Let's Encrypt DNS validation. Hint: request certificates for the root and a wildcard (e.g. domain.ca and *.domain.ca) so you aren't leaking internal DNS names (not that it matters much).
You run an internal DNS server (Pi-hole + Unbound is my combo of choice) which becomes authoritative for your internal LAN.
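A minimal sketch of both halves, assuming certbot on the public side and Unbound on the internal side; domain.ca, the hostnames, and the addresses are all placeholders, and paths vary by distro:

    # Public side: wildcards require the DNS-01 challenge. Manual mode
    # shown here; a DNS provider plugin would automate the TXT record.
    certbot certonly --manual --preferred-challenges dns \
        -d domain.ca -d '*.domain.ca'

    # Internal side (run as root): have Unbound answer for the LAN names.
    cat > /etc/unbound/unbound.conf.d/internal.conf <<'EOF'
    server:
        local-data: "nas.domain.ca. IN A 192.168.1.10"
        local-data: "git.domain.ca. IN A 192.168.1.11"
    EOF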
This is mind-blowing. Last I checked, the front page of HN sends tens of requests per second to each link, and there are humans who can pack envelopes faster than the typical Mastodon server can answer GETs. I'd love to see someone benchmark the top servers for a few seconds and see what it takes to break a reasonable latency SLA.
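If anyone wants to try, a rough sketch with ApacheBench; the instance and path are made up, and please be gentle with real servers:

    # 200 requests, 20 concurrent, against a hypothetical instance
    ab -n 200 -c 20 https://mastodon.example/@alice
    # compare the 95%/99% lines in the output to your latency budget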
One inconvenience is that although RFC 8657 explains how to tell a CA that it must use particular validation methods, the most obvious public CA (Let's Encrypt) has not shipped RFC 8657 support. So you can write a CAA record which says "Only Let's Encrypt may issue", or indeed "Only Sectigo may issue", but you cannot write a record which says e.g. "Only Let's Encrypt may issue, and they must use the tls-alpn-01 method". Or rather, you can write that record, but it won't work.
Now, there are a bunch of things you could do about that, and I believe this cool toy does one of the obvious ones: don't have any certificates for the problematic domain. The web site isn't in the domain you can mess with. But it would be nice if Let's Encrypt got to this. I check periodically, and so far each time somebody else has pestered them about RFC 8657 recently, so I don't pile on, since that's unhelpful.
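In zone-file terms (domain.ca as a placeholder), the difference looks like this; the second record is valid RFC 8657 syntax, it just isn't honored yet:

    ; what works today: pin issuance to a single CA
    domain.ca.  IN  CAA  0 issue "letsencrypt.org"
    ; the RFC 8657 form: pin the CA *and* the validation method
    domain.ca.  IN  CAA  0 issue "letsencrypt.org; validationmethods=tls-alpn-01"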
I tried using the 8.8.8.8 flush-cache tool for one domain, but sending emails from Gmail to that domain kept failing for hours, so it's not perfect. You would at least think that Gmail uses 8.8.8.8 for DNS.
Was it failing at Gmail before you flushed at 8.8.8.8? Because then you would have to wait while Gmail's own cache expired, assuming they even use the public 8.8.8.8 service.
Domains themselves don't use DNS servers the way your network connection does. Did you try using 8.8.8.8 as your domain's nameservers? Because that's a misconfiguration: your domain's nameservers need to be authoritative for your domain, which Google's HonestDNS is never going to claim to be. (Even if you're using GCP.)
I think he meant he used the Flush Cache functionality of 8.8.8.8, but emails still failed on Gmail until their cache was invalidated (so Gmail probably is not using 8.8.8.8, or the flush cache doesn't actually work as intended).
Caches are hierarchical. With recent OS releases, even your local machine will cache records most of the time; then your home router or some other DNS server on your network will often cache things too, before finally referring to your ISP's or Google's DNS server.
Invalidating the cache at one level doesn't invalidate the caches downstream of it if they already looked the record up recently. But it does mean that anyone who hadn't looked up the record yet will get the correct result straight away.
And it's not like they need to hold on to any state to work. If you had access you could purge everything and have them start fresh from the root servers, and it would work fine. (As long as the load spike doesn't make it decide to do something dumb, of course.)
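You can watch the hierarchy at work with dig (domain.ca is a placeholder): each resolver hands back its own cached copy with its own remaining TTL, which is why flushing one of them does nothing for the others.

    dig +noall +answer MX domain.ca @8.8.8.8   # Google's cache
    dig +noall +answer MX domain.ca @1.1.1.1   # Cloudflare's, entirely separate
    dig +noall +answer MX domain.ca            # whatever your machine uses
    # the second column is the remaining TTL; watch it count down on repeats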
Maybe it's just me, but this seems like the more obvious behaviour? Personally I'd typically extract in /tmp/relevant-name, and sometimes that results in /tmp/relevant-name/relevant-name.
Doesn't seem a big deal or require/cause trust issues to me.
(And when I create one, I always have to check/look up what happens, so it doesn't surprise me that a variety of things get done at all.)
It's been common convention for decades that if you distribute a source tarball of something, everything inside is in a directory named foobar-1.0, where 'foobar' is the project name and 1.0 is the version.
Not everyone does this, of course, but it's nice when they do. Because it means you can just wget the file into a dir and untar it without worrying about it messing up whatever is already there. Also handy for putting different versions of the project side-by-side.
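Producing that layout is a one-liner either way (a sketch; foobar-1.0 is the placeholder name):

    # pack an existing directory whose name carries project and version
    tar czf foobar-1.0.tar.gz foobar-1.0/
    # or have git prepend the directory for you
    git archive --prefix=foobar-1.0/ -o foobar-1.0.tar.gz HEAD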
OK, but like you say, it's a mixed bag. I 'wget the file into a dir and untar it without worrying about it messing up whatever is already there' because nothing is already there: it's a mktemp -d or the manual equivalent.
That was typical on DOS/Windows when distributing ZIP archives, for a long time.
But on *nix systems, the idiom for tarballs usually includes a directory containing all of the contents.
> (And when I create one, I always have to check/look up what happens, so it doesn't surprise me that a variety of things get done at all.)
True - I usually do a `tar tf foo.tar.xz | head` to get a quick peek at the archive. This generally avoids the problem of dumping a bunch of files into the current directory.
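Combining the peek with the throwaway-directory habit from upthread covers both cases (foo.tar.xz as the placeholder):

    tar tf foo.tar.xz | head -3     # peek: is there a top-level directory?
    dir=$(mktemp -d)                # if unsure, extract somewhere disposable
    tar xf foo.tar.xz -C "$dir"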
Go to fast.com and open the browser dev tools on the Network tab. Run a speed test; the hosts you see are the local caches (Netflix Open Connect appliances) that you would use to stream movies.
Note: I don't know this for sure, but it's the most likely explanation.
It boggles my mind when I see this option in any password manager (and I think every single one has this 'option').
Why do password managers let people store TOTP secrets next to the password? This completely invalidates the 2FA of TOTP if your password manager gets broken into.
> This completely invalidates the 2FA of TOTP if your password manager gets broken into
I think that's the big "if". If you assume the password manager is secure (which something clearly wasn't in this case, but that seems like an outlier), TOTP secret in the password manager still secures the account.
Is such a setup as protective as a separate storage method? No, but it's leagues more convenient. A cloud-based PW manager also solves the problem of a lost/broken/new phone causing you to lose all of your 2FA setups. Some 2FA apps do as well (Authy, iirc), but trust me when I say people lose 2FA codes _all the time_. And then 2FA needs to be disabled by support, which is its own can of worms.
The best security measures are the ones people actually use. If not having to use a separate app is the convenience people need, then I think it's totally worth it.
I mean, if the password manager’s store is compromised, then sure, okay. But if only the application password is compromised then it’s still 2FA since the attacker cannot authenticate with just the password.
The F in 2FA is 'factor'. Satisfying one login request from one factor (the password vault) is 1FA. This is why the second factor is normally something that isn't your password vault (which historically was your head and is now a piece of software): a hardware key, a recovery code, etc.
A slightly more generous interpretation is 1.49FA (rounds down), because someone with a reused username/password combination still can't get in. But if you're using a vault behind a strong master factor, the Venn diagram of "people who have your password" and "people who also have your master password" is pretty tight, except for cases where the provider has been breached (then all bets are off).
Don't dispose of the second factor for convenience.
And the A in 2FA is authentication, not storage. The password vault is not a factor, because it is not what is provided for authentication; the individual password is the factor. The fact that the vault being compromised reveals both factors does not make it no longer 2FA.
Colocating the factors' storage definitely makes certain attack vectors possible that aren't otherwise, but it's still 2FA. Are hardware keys best? Likely. But many people probably keep their password vault and TOTP application on the same device (e.g. both Bitwarden and Authy on their phone), which is a middle ground on the convenience-vs-security spectrum between TOTP in the password vault and hardware keys, and I doubt many would say that it's not 2FA.
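For what it's worth, the TOTP "factor" is nothing more than a shared secret plus the clock; a sketch with oathtool (the base32 secret here is a made-up example):

    # prints the current 6-digit code for this secret
    oathtool --totp -b "JBSWY3DPEHPK3PXP"
    # whoever holds the secret can mint valid codes forever, which is
    # exactly why where you store it decides how separate the factors are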
Because I already use MFA to access my password manager in the first place, and don't want to deal with managing backups for each flavor of MFA app that is pushed on me.
How do you manage MFA for encryption-at-rest? None of the common TOTP systems do this. LastPass and 1Password have built-in "local encryption keys", but they're stored in the same place as the store and only protected by your password. I think theoretically you could set this up with KeePass using a composite master key (combining a password-protected key and a certificate-protected key, storing the certificate separately, ideally in an HSM), but I don't know anyone who does this.
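A rough sketch of a simpler split with KeePassXC's CLI, assuming recent keepassxc-cli flags (check your version): the key file lives on removable media while the password stays in your head, so neither alone opens the database.

    # hypothetical paths; the key file never sits next to the database
    head -c 64 /dev/urandom > /media/usb/vault.keyx
    keepassxc-cli db-create --set-key-file /media/usb/vault.keyx \
        --set-password vault.kdbx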
Or just keep them somewhere that isn’t directly beside the password?
I have my passwords in a password database, and my TOTP tokens on my phone and a Yubikey.
I have a second “break glass in case of emergency” password database that contains TOTP secrets for all my most essential accounts and a backup of the key loaded on my Yubikey.
It happens a lot when you have so much infrastructure and redundancy that you think it's too big to fail. Then you lose S3 in us-east-1 and break everything.