
Google could, you know, use their own DNS servers for this...


No, they couldn't. The whole purpose of these probe requests is to assess whether the DNS server used by a particular client is behaving normally (responding with NXDOMAIN when a domain does not exist), so they must be sent to the client's own DNS server. Unless that server performs the hijacking being detected, the queries will inevitably end up at a root DNS server, because no server in the hierarchy knows those domains.

Forcing these probe requests onto Google's DNS would completely defy their purpose in the first place.
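The detection logic described above can be sketched in a few lines (a hypothetical illustration of the idea, not Chromium's actual code; the helper names are made up):

```python
import random
import socket
import string

def make_probe_label(rng, length=10):
    """Build a random single-label hostname that almost certainly doesn't exist."""
    return "".join(rng.choices(string.ascii_lowercase, k=length))

def resolver_hijacks_nxdomain(rng=None):
    """Return True if a made-up hostname resolves, i.e. the configured
    resolver is rewriting NXDOMAIN answers into positive responses."""
    label = make_probe_label(rng or random.Random())
    try:
        socket.getaddrinfo(label, 80)
    except socket.gaierror:
        return False  # honest failure: the name really doesn't exist
    return True  # a nonexistent name got an answer: someone is injecting records
```

The key point is that `socket.getaddrinfo` goes through the client's configured resolver, which is exactly the component under test; querying 8.8.8.8 directly would bypass it.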


Instead of http://asdoguhwrouyh, they could probe something like http://asdoguhwrouyh.google or anything else in a zone owned by them, so the uncacheable traffic would hit only their authoritative name servers and not the root servers.


But then a lying DNS server could easily identify those, and NOT lie about http://*.google -- the reason these requests are entirely random domain names is so they're not easily recognized as probes.


Except that the queries are already totally identifiable as probes in their current form, which is demonstrated in the article.
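To illustrate how trivially a hijacking resolver could fingerprint these probes (a hypothetical sketch, using the 7-14 character range mentioned downthread; not any ISP's actual filter):

```python
import string

def looks_like_chrome_probe(qname):
    """Crude fingerprint for Chromium-style NXDOMAIN probes: a bare,
    dotless hostname made only of lowercase ASCII letters, in the
    length range the probes are said to use. Almost nothing else on
    a typical network queries names shaped like this."""
    label = qname.rstrip(".")
    return ("." not in label
            and 7 <= len(label) <= 14
            and label != ""
            and all(c in string.ascii_lowercase for c in label))
```

A resolver running this check could answer NXDOMAIN honestly for matching queries and keep hijacking everything else, which is the attack the random naming was supposed to prevent.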


... only when the delegation for google. is cached.


ah, good point.


It wouldn't usually help to use 8.8.8.8, but they probably could use their own authoritative servers instead of the root servers. Look up <random chars>.dnstest.google.com or <random chars>.dev or something.

The problem with this is, of course, that a malicious resolver could detect this and NXDOMAIN those queries, while passing others through. I don't see what the incentive would be for ISPs to do that, but ISPs are weird.


> that a malicious resolver could detect this

I assume the reason for changing from a 10 char random string to a 7-14 char random string was exactly because some ISPs were detecting it...
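If an exact-length match was the tell, randomizing the length is a one-line change (a hypothetical sketch of the idea, not Chromium's code):

```python
import random
import string

def make_probe_label(rng):
    # A fixed 10-character label is trivially filterable; drawing the
    # length from 7-14 (as described above) forces a lying resolver
    # to match a much broader class of hostnames, with more risk of
    # false positives on legitimate single-label queries.
    length = rng.randint(7, 14)
    return "".join(rng.choices(string.ascii_lowercase, k=length))
```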


Unfortunately the commit message doesn't explain why the change was made:

https://chromium.googlesource.com/experimental/chromium/src/...


@agl?? You here? Do you remember the motivation for this change?


They could just use DNSSEC...


DNSSEC has to be supported by your ISP’s DNS server, which they won’t if they’re trying to intercept your queries.


They're suggesting you install your own local DNS server --- nothing prevents you from doing that, and just talking straight to the roots instead of through your ISP's DNS server.

The real problem is not your ISP, but rather the fact that the most important sites on the Internet have rejected DNSSEC and aren't signed. DNSSEC can't do anything for you with hostnames in zones that haven't been signed by their operators, and, to a first approximation, every zone managed by a serious security team (with a tiny number of exceptions, like Cloudflare, which sells DNSSEC services) has declined to sign.


Do any ISPs intercept upstream requests when running your own recursive resolver? If not, DNSSEC isn’t relevant here and you should be just fine with “only” running your own without requiring DNSSEC.


They could. I doubt they do.



