Chromium's Impact on Root DNS Traffic (apnic.net)
438 points by jakob223 on Aug 21, 2020 | 211 comments



Couldn't the traffic be somewhat reduced by changing the time and order of operations?

Currently, Chrome does the following:

(1) on each network change, send three DNS requests with random hostnames.

(1a) If at least two of the queries resolve to the same IP, store the IP as the "fake redirect address".

(2) on a user search, query the first search term as DNS.

(2a) If the query result is NXDOMAIN or matches the fake redirect address, do nothing. Otherwise, show the "local domain" hint.

Instead, it could do:

(1) on a user search, query the first search term as DNS.

(1a) if the query comes back with NXDOMAIN, don't show the hint and stop. We're done.

(2) otherwise, make two more DNS queries with random domain names to check for fake redirects.

(2a) if the two queries resolve to the same IP as the first one, we have a fake redirect. Don't do anything. Otherwise, show the "local domain" hint.

Results of step (2) could be cached until a network change.

This would only require 2 instead of 3 probe queries and only if the user actually searched for something and if the search term actually caused a DNS match (fake or genuine).
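
A minimal sketch of that flow (Python-flavoured pseudocode; resolve() and show_local_domain_hint() are stand-ins for whatever the browser actually uses):

    import random, string

    NXDOMAIN = None
    cached_probe_ips = None   # reset this on every network change

    def random_hostname(length=10):
        return "".join(random.choices(string.ascii_lowercase, k=length))

    def maybe_show_hint(search_term, resolve, show_local_domain_hint):
        global cached_probe_ips
        ip = resolve(search_term)                  # (1) query the search term itself
        if ip is NXDOMAIN:
            return                                 # (1a) no hint, and no probe queries at all
        if cached_probe_ips is None:               # (2) only now pay for the two probes
            cached_probe_ips = {resolve(random_hostname()) for _ in range(2)}
        if ip in cached_probe_ips:
            return                                 # (2a) same answer as the probes: fake redirect
        show_local_domain_hint(search_term)

    # toy run: a resolver that "hijacks" every name to 192.0.2.1 never triggers the hint
    maybe_show_hint("intranet", lambda name: "192.0.2.1", print)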


> (1a) If at least two of the queries resolve to the same IP, store the IP as the "fake redirect address".

From reading the source, it actually does an HTTP HEAD request, chasing redirects, records the origin of the final page, and uses that as the redirect address. So even if two hostnames yield different IPs, if they end up redirecting to the same hostname, it will be detected.

> (2a) if the two queries resolve to the same IP as the first one, we have a fake redirect. Don't do anything. Otherwise, show the "local domain" hint.

What if an ISP uses multiple IPs in the fake redirect, and alternates over those IPs in each successive response?


> What if an ISP uses multiple IPs in the fake redirect, and alternates over those IPs in each successive response?

Good point. I was wondering how they'd deal with that in the actual implementation.

I think you got the answer though: they match HTTP origins instead of IP addresses - so I imagine you could do the same in step 2: do an HTTP HEAD request to the search word and two additional ones to random hostnames, following redirects. If the final origins are the same, there is fakery going on.

A problem with this could be unexpected HEAD requests to actual internal hosts: there is no guarantee an internal host that was never meant to receive HEAD requests would react gracefully, or in any predictable way, to one.

I'm not sure how they solve this currently. Maybe this could at least be mitigated by only sending the HEAD request to the search word host if there is reasonable suspicion requests are being redirected - e.g. only if the two random hosts resolved and were both redirected to the same origin.

Finally, you could cut all of this short by also connecting to (search word):443 and trying to initiate a TLS handshake. If the host answers, you know it's probably a genuine internal host that talks HTTPS and you don't need to do any additional probes. (And you can also abort the handshake and never need to send an actual HTTP request to the host.)
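
For what it's worth, a rough sketch of that origin-matching check (Python with the requests library; hostnames, probe count, and timeouts are arbitrary, and real code would need to treat a plain NXDOMAIN / connection error as "no redirect"):

    import random, string
    from urllib.parse import urlparse

    import requests

    def final_origin(host):
        # HEAD request, following redirects; report the origin we end up at
        r = requests.head(f"http://{host}/", allow_redirects=True, timeout=3)
        u = urlparse(r.url)
        return (u.scheme, u.netloc)

    def random_host(length=10):
        return "".join(random.choices(string.ascii_lowercase, k=length))

    def looks_hijacked(search_word):
        probes = {final_origin(random_host()) for _ in range(2)}
        # two random hostnames landing on one and the same origin, with the search
        # word ending up there too, smells like an NXDOMAIN-hijacking network
        return len(probes) == 1 and final_origin(search_word) in probes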


> A problem with this could be unexpected HEAD requests to actual internal hosts: There is no guarantee an internal host that was never meant to receive HEAD requests would react gracefully or in any way predictable to one. I'm not sure how they solve this currently

A perfectly legitimate answer is to not even try. Per the HTTP spec, sending HEAD is supposed to be harmless. If it causes harm, that’s a violation of the spec. I’m sure some legacy server out there segfaults on HEAD, but that’s not a browser vendor’s problem, and it isn’t up to the browser vendor to do anything to try to prevent it. Browsers (not just Chrome) send HTTP HEAD in other scenarios as well. And I think this problem is relatively rare in practice, since one hears few reports of it actually occurring


I disagree. In this case, it would be Google who is changing the status quo.

I think it's legitimate for a fully internal server on an internal network that only handles internal clients to only implement the parts of the HTTP spec that are relevant to the exchange.

If Google worms its way into that network and starts to talk to random servers, they'd be at fault for causing problems.

I think this holds particularly strong for this case as the requests are not tied to an obvious user action: If I typed "http://dumbo/" into the address bar and the browser's GET request broke the server, I'd be more inclined to view the server's team at fault than if I just searched for "dumbo" and found out the browser broke a server that I didn't even know existed.

Independent of that, it's of course good advice to always build your services as robust as possible - internal or not - and to follow the spec wherever you're able to.


> I disagree. In this case, it would be Google who is changing the status quo.

This code has been in Chrome since 2010. When you've been doing something for the last ten years, you aren't changing the status quo; you are part of the status quo.

> I think it's legitimate for a fully internal server on an internal network that only handles internal clients to only implement the parts of the HTTP spec that are relevant to the exchange.

So, there are two ways of "not implementing HEAD": (1) ways that don't harm the availability of the service or other connections to it (return an HTTP error, abruptly close the connection, etc.), and (2) ways which do (e.g. crash the whole service upon an HTTP HEAD on a single connection).

If a service doesn't implement HTTP HEAD in way (1), then Chrome isn't going to hurt it. If it doesn't implement HTTP HEAD in way (2), then it is buggy, poorly written, and also insecure (HTTP HEAD becomes a denial-of-service attack), and that's not Google's problem; that's the problem of whoever maintains that service.

In practice, few services whose lack of HEAD support falls into way (2) are even going to exist, because browsers (both Chrome and others) regularly send HTTP HEAD in other circumstances as well. If HTTP HEAD makes your service crash, your service is going to be crashing a lot even if Google had never implemented this particular feature.


That's pretty elegant imo. I think the NXDOMAIN of 1a can be cached too. If we get a result on the next search query it should be safe to assume it's a legit one.

Maybe, at the risk of over-engineering, additionally cache the results for the last N networks persistently. Something like (gateway, DNS, localip) as key. I could see those three being identical on different networks though... And assuming the article is right and most ISPs globally do not mess with NXDOMAIN, this might not be necessary anymore with this proposal.


This adds at least one full RTT, and Chrome is very much about minimizing those on user queries.


True, but I think it would be acceptable in this situation:

- the queries only affect the time after which the "local domain" hint appears. They don't influence the time until the main search results appear.

- if the result is cached, the additional roundtrip is only for the first hostname entered after a network change.

- the two "random hostname" probes can be executed in parallel, so it should not result in more than one additional roundtrip.


The first question I asked myself: is there a way to disable it? The networks I'm attached to don't do any hijacking.

And yes, luckily there is a policy to disable it: https://cloud.google.com/docs/chrome-enterprise/policies/?po...

Registry key: Software\Policies\Google\Chrome\DNSInterceptionChecksEnabled

PowerShell: Set-ItemProperty HKLM:\SOFTWARE\Policies\Google\Chrome -Name DNSInterceptionChecksEnabled -Value 0 -Type DWord
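
On an unmanaged machine the policy key may not exist yet, so you may need to create it first; something like this (untested sketch):

    New-Item -Path HKLM:\SOFTWARE\Policies\Google\Chrome -Force | Out-Null
    Set-ItemProperty HKLM:\SOFTWARE\Policies\Google\Chrome -Name DNSInterceptionChecksEnabled -Value 0 -Type DWord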

If you are managing Chrome via GPO, you should do it via GPO. Templates can be downloaded here: https://chromeenterprise.google/browser/download/


I wouldn't apply this policy to road warriors, even if they spend most of their time in a location you have under control.


Wait, so Chrome leaks the first word of my searches to my ISP? That doesn’t sound like something I want to happen


That's another reason to use an internal DNS server which queries an upstream DOH server.


"That's another reason to use an internal DNS server which queries an upstream DOH server."

Even better, spin up a little VM or VPS somewhere in the cloud, install 'unbound' as a recursive resolver and point it to your nextdns.io account/address.
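
For reference, the unbound side of that is just a forward-zone over TLS; roughly like this (the addresses and config ID are placeholders - take the real values from your NextDNS setup page):

    server:
        tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"

    forward-zone:
        name: "."
        forward-tls-upstream: yes
        # placeholder IPs and <config-id>; use the ones NextDNS shows for your account
        forward-addr: 45.90.28.0@853#<config-id>.dns.nextdns.io
        forward-addr: 45.90.30.0@853#<config-id>.dns.nextdns.io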

Let's unpack this ... backwards ...

DNS servers out on the Internet are queried by nextdns, which presumably has no PII from you other than your CC number[1] and zip code.

Nextdns receives nothing but queries from some random VPS/EC2/VM IP. Again, presumably a provider that knows (almost) nothing about you.

Your ISP sees nothing ... just encrypted DNS traffic.

It's win, win, win.

You see no ads, since nextdns.io acts like a pihole and strips/blocks all of the malicious hostname lookups.

[1] Remember, only AMEX verifies cardholder FIRST LAST. Use your VISA/MC. I think my first/last is Nextdns User or whatever ... YMMV if a merchant is enrolled in that weird "verified by visa" service ...


I still don't understand what nextdns.io is doing in the stack.

Couldn't you just run your recursive resolver as an actual recursive resolver and let it ask the respective authoritative servers directly, instead of forwarding to a middleman? You can run your own blocklists on your unbound/kresd/whatever.

Then DNS servers out on the Internet are queried by some random IP from a VPS/EC2/VM IP range, so they are about as wise as when queried by nextdns.io.


Yes, of course nextdns is not required - I simply added it because that is my own setup and it adds the pihole-like ad-blocking to the workflow.

They are my favorite IaaS startup of the last 5-10 years - it is a genius idea and I wish I had thought of it.


> Remember, only AMEX verifies cardholder FIRST LAST. Use your VISA/MC.

Do you have a source for this?


Anecdotally, I use three different Amazon accounts for both personal and business accounts and none of them have a real first/last name on them. In fact, I only use my actual first/last name with online payments when dealing with government agencies or regulated purchases.

But don't take my word for it:

https://ux.stackexchange.com/questions/31006/should-we-ask-f...

"While you're correct that Visa and MasterCard do not validate this information, that's not true of all credit card providers."

... actually, the entire stackex discussion at that link is fairly interesting ...


Thanks!


Last I checked, I think “DNS Security” (DoH) was shipped in Chrome, you can pick an alternative in Settings, I think. Such as, in this case, Google. Not sure if that changes the way this nxdomain check behaves, presumably Chrome trusts TLS but not the ISP’s DoH?


For DoH, Chrome does not do that check. Instead, one of the requirements to be one of their allowed DoH providers is that they don't do the evil NXDOMAIN redirect responses.

But Chrome also falls back to trying non-DoH on NXDOMAIN, so it doesn't really help. I guess they need to do that so internal domains work correctly.


I'm always baffled why chrome can't have a separate search/address bar, which avoids this issue entirely.


To me, the single bar functions kind of like a CLI for the browser. I regularly use it to:

* Type/paste a URL

* Type/paste a search

* Search my browser history (usually to jump to a previous URL)

* Use search engine keywords to do direct searches on some applications I use regularly (eg, "jira P-123" does a search in JIRA directly, which happens to jump to that ticket directly)

Browsers that separate those two drive me a bit crazy, because of the extra thinking required before typing.

(I don't really like this whole "using the first search term as DNS lookup" but that's separate from the UX of single vs separate inputs.)


There is no extra thinking, at least for me: search on the right, URLs and history on the left. I mean: it's automatic, my fingers know what to do. I guess search engine keywords would go to the right too but I don't use that feature.


Firefox's UI is kind of all over the place, but I love that I can paste or type a search term, and then arrow-key my way down to a set of search options at the bottom.


They design for an audience that, on average, cannot distinguish between the address bar of the browser and the search bar on the Google homepage.


This isn't always a good thing.

My grandmother doesn't know the difference between an "address bar" or "search bar". Recently she got an email from her insurance company telling her to go to their website www.whateverinsurance.com and click "sign in" and then click "my account" and update her credit card info. The email had the url but it wasn't a link for some stupid reason. She goes to her browser and types in "www .whateverinsurance..com" because her eyesight isn't very good anymore and presses enter. Then instead of giving her an error saying the website doesn't exist and she should re-try entering it, it goes to a Google search page! She clicks "sign in" but her password doesn't work because she's on Google instead of her insurance website. So I get a call and have to figure out why her "insurance isn't working".

When I finally get her to her insurance website, she mistypes her password and presses "log in", and nothing happens. Windows is configured with 175% magnification, which means that the "invalid password" div that appears isn't visible on her screen unless she scrolls to the top of the page!

She originally tried calling her insurance company and updating her credit card number by phone, but she couldn't enter her credit card number fast enough and it timed out and told her to go to the website instead. WHY DON'T THEY TEST THIS STUFF???!?

Sorry, I went on a tangent there. I get irrationally upset by this kind of stuff.


Computers are bad and we should feel bad.


I mean after a certain point you just have to accept the kinds of things that your users will type in whatever text boxes you show them and make it work. If you know what the user is trying to do then it's not good UX to throw an error or tell them "I know you're trying to search, but I won't until you retype it into this other box".

Google Maps is a good example of this. The original text box you were shown only searched for an address, but enough people typed business search terms that eventually they just implemented that feature.

The Ansible vault is a bad example of this. They have a little command `ansible-vault` that lets you manage encrypted files and strings. If you run `ansible-vault edit ./nonexistent_file` it tells you that you meant `ansible-vault create` and vice versa but doesn't just do it despite the user intent being clear. This ultimately led me to just patching it to do the right thing.


> The Ansible vault is a bad example of this. They have a little command `ansible-vault` that lets you manage encrypted files and strings. If you run `ansible-vault edit ./nonexistent_file` it tells you that you meant `ansible-vault create` and vice versa but doesn't just do it despite the user intent being clear. This ultimately led me to just patching it to do the right thing.

IMO it's a bit much to decide what "the right thing" is there. Blindly assuming that someone attempting to edit credentials didn't mistype a file name isn't exactly safe and sounds like a great way to cause problems based on believing you updated something you did not in fact update.


That was my first thought as well. This is going to lead to people typo'ing, opening a blank file, being confused that their credentials are gone, and then adding in the updated credentials in the wrong place.


FYI:

CTRL-L: focus URL bar, typed text will be navigated to or searched for

CTRL-K: focus URL bar, typed text will be searched for

(same in Firefox, with the distinction that Firefox has two UI elements instead of one)


FYI

ALT + D: focus URL bar, typed text will be navigated to or searched for

CTRL + E: focus URL bar, typed text will be searched for

(I find those better since they can be used with my left hand only)


Because it's designed to make as many Google queries as possible instead of being geared for URL entry.


You can prefix your searches in the omnibox with “?” and they won’t be treated as possible short local network names


Also, if you prefix a hostname with "http://" or "https://", it won't be treated as a search.


Ctrl+K for Windows users looking for a shortcut key.


It also works in Firefox (and on Linux); if you use the split address/search bar it focuses the search bar instead, as it used to.


Indeed, I normally turn off "keyword.enabled" and "browser.urlbar.oneOffSearches" in about:config.


Yes, you need keyword.enabled set to false otherwise Firefox will still search even though you have a different search box :(.

In Firefox, it is also possible to set keyword.enabled to false and still search explicitly in the URL bar via the keyword search mechanism (they overloaded the terminology a bit :/, the one you get if you right-click on a search box and select "add a keyword for this search"). I've been thinking of trying to just use DDG-style ! keywords via the URL bar (and "s" for a default search to make it easier to type; maybe single letters would be easiest in general) rather than a search box. OTOH, the keyword search (the second type :/) seems neglected and I wouldn't be surprised to see it disappear at some point.


That works on multiple platforms! I'm able to do that on Arch.


In the good old times, ctrl-l went to the location bar and ctrl-k went to the search bar. After they merged, ctrl-k just prepends the question mark. But... as an old shortcut, pretty much everything supports it.


Isn’t “?” and ctrl-k the same number of keystrokes?


Only if you don't count your mouse clicking the omnibar as a keystroke.


That’s a good point. I didn’t consider that, and now I’ll use this shortcut. If I had been at my computer I would have realized that immediately.


Chrome leaks 1 word searches in the address bar, yes.


Your local nameserver should be configured to not forward unqualified names upstream.


Any time you say something 'should' be something, it's an indication that sometimes it's not.


There is no such thing as unqualified names at this level. All domain names are fully qualified, and comprise one or more labels.


I'm referring not to a recursive nameserver, but to a caching one that simply forwards queries to an upstream resolver. Like the one in every consumer router. Usually that's dnsmasq, with this option:

       -D, --domain-needed
              Tells dnsmasq to never forward A or AAAA queries for plain names, without dots or domain parts, to upstream nameservers. If the name is not known from /etc/hosts or DHCP then a "not found" answer is returned.
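
In a typical /etc/dnsmasq.conf that's a single line:

    domain-needed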


That is not an unqualified domain name, and notice that it does not say that it is.

* http://jdebp.uk./FGA/dns-name-qualification.html

And the words that you are looking for are "resolving" and "forwarding". A proxy DNS server either does query resolution itself or forwards to another proxy DNS server that does. Both sorts can cache, so whether something is a caching server is not the distinction. dnsmasq is choosing whether to forward the query or to do query resolution itself (using a local data source) according to the number of labels in the domain name. As I said, at this level the idea of domain name qualification does not apply.

You are also mis-using "resolver", incidentally. The actual meaning of "resolver" per RFC 1034 is not what people sometimes think it to be. Avoid using "resolver". The mis-use creates confusion.

See https://news.ycombinator.com/item?id=15232208 .


How many people know how to configure their local name server outside of the HN crowd?


This is the default configuration in all consumer routers I've seen. Granted, that's not very many.


So how does it resolve "com" then? Or the (small number of) sites that are hosted directly on a TLD?


It doesn't. It's not a recursive resolver. It forwards qualified names (those including a dot) to the upstream nameserver (the ISP's).


My local nameserver is run by comcast!


Really? Your router does not have a caching nameserver built in?



How do you mean? So if I type ihavecancer, it tries to resolve to ihavecancer.com or something?


It tries to resolve `ihavecancer` as a TLD, because it may be a local TLD on your intranet.


WAIT


> Wait, so Chrome leaks

That’s obviously so, because that’s its entire raison d'être.


Its raison d'être is to leak stuff to Google. While that's a problem, and probably a worse one, it's not the same problem as leaking stuff to an ISP.


Not exactly. Chrome doesn't know if you're trying to enter a domain name (hence a URL) or a search term. The omnibox supports both. So Chrome tries to resolve the string you entered, and if it gets back an NXDOMAIN it can assume that it's a search term.

The problem is that some ISPs have configured their DNS resolvers to lie and not return NXDOMAIN, instead redirecting you to some website for marketing purposes. The Chromium workaround is to try to detect whether it is using a lying DNS resolver by issuing queries that it knows SHOULD return an NXDOMAIN.

If this concerns you, run your own resolver, enable DNSSEC validation, and enable aggressive NSEC caching (RFC 8198).
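
With unbound, for example, that boils down to a couple of lines in unbound.conf (a sketch; the trust anchor path varies by distro):

    server:
        # DNSSEC validation with an automatically managed root trust anchor
        auto-trust-anchor-file: "/var/lib/unbound/root.key"
        # RFC 8198: synthesize NXDOMAIN answers from cached NSEC records
        aggressive-nsec: yes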


This post contains a bunch of information about the question, but it doesn't seem to actually address the question.

The question is: does Chromium send the first word I type to my ISP?

The answer appears to be: yes.


I just tested this myself by logging the DNS queries, and yes, this is true.


> the first word

No.

The answer appears to be yes if you said "the only word", though.


...which the first word would be, as you typed it?


I think this only happens after you press enter, not as part of the omnibox real time results. After all the infobar mentioned in the article definitely only appears after you commit the search query.


depends how fast you type - there's a delay on the query so it's not re-querying for every keystroke. but probably yes.


So because Google thinks I am too stupid to handle a separate URL box and search box, and they are so much smarter than me that they can write a simple if-else to discern what I want with a few bullshit DNS queries, I’m stuck with a browser that leaks information and fails to do what I want several times a day until I learn to work around this behavior. And the proposed solution is for me, dumb dumb user that I am, to run my own resolver with DNSSEC validation and NSEC caching?

I am getting close to moving to a hut in the woods and forgetting all about the internet.


I moved to the desert in a developing country. But fibre optics to the house took 4 days and is $55/month. There is no escape.


DNSSEC can only distinguish valid from invalid NXDOMAINs on signed zones. A tiny, tiny minority of zones in .COM, .NET, .ORG, and .IO are signed. Installing your own local DNSSEC resolver to "fix" the Chrome URL bar would be a tremendous misallocation of effort.

If your ISP forges NXDOMAIN responses, the correct response is to DOH to a provider that doesn't do that. That's a simple networking config change, for which there is UI in every mainstream operating system. The DNSSEC part of this conversation is just silly.


Do whatever you want as your proposed mitigation, but we are talking about the root zone here, which is signed.


My proposed mitigation is being deployed in every modern browser, and completely eliminates the ISP-spoofed NXDOMAIN problem. Yours asks users to install their own DNS server, and still doesn't eliminate the problem. I'm comfortable saying that my advice is correct, and the advice to use DNSSEC to solve this problem is malpractice.


That infuriates me. It totally can know. Did it start with http:// https:// ftp:// ...? I really dislike how browsers decided everything is a search.


> I really dislike how browsers decided everything is a search.

Not browsers, just Chrome. The rest followed.


Verisign has nobody but itself to blame for "inventing" this with its SiteFinder fiasco in 2003.


Right! Around 2010, when this feature was implemented in Chrome, hijacking was a business model that was discussed in regular meetings. I recall one hijacker trying to sell themselves to the company that was 'complaining' about the hijacks.

"Buy us out and we'll stop, and you can use the tech on your customers?!?"

One of the boldest business proposals I've been party to. After a few deep breaths and some laughter, the offer was not taken. But that wasn't a one-off event. Spent a lot of time in the early 2010s directly trying to protect customers from this stuff. Still do, but it's getting much harder with TLS-everywhere, HSTS, DOH, and many other things. Not impossible though, we can never let up on the pressure to keep the ROI too low for hijacking. The various network operators and ISPs that let these companies put racks in their data-centers to inspect user traffic should be <<insert_your_own_horrible_idea_here>>.


oh wow, I remember that and how it broke so many scripts and processes. It's what some of these crappy ISP DNS servers do, except for the entire .com/.net TLDs around the planet.


For me, the kicker: if I'm reading it correctly, over 40% of DNS traffic to the root server they examined is just diagnostic probes from Google Chrome being used to spot malicious DNS servers.


We got hit by this issue in March when our remote users increased 5+ times and the DNS traffic going through our VPNs was causing a headache for our DNS servers. We pinpointed this to this Chrome functionality, which also affects other Chromium-based browsers like the new Edge, and we had to deploy a relevant GPO to disable this functionality. Some background: I'm talking about ~200+k remote users. Also, while in the office the load is distributed across tens of DNS servers, when on VPN only a fraction of those are used. Furthermore, if I remember correctly, this "feature" in Chrome was enabled in a version which was distributed to our clients maybe a month before the lockdowns, so there was little time to see the effect while clients were still in the office.


In a corporate / enterprise network where the DNS servers are Windows servers (domain controllers, in my experience, most of the time), the best thing you can do is stand up a few instances of <insert favorite DNS server here>, running on Linux, set them up as slaves for your internal zones, and point your users at those servers instead of your Windows servers.


You can also use stub zones to forward traffic for a single subdomain to your AD servers, while the other dns server handles recursive queries to the internet.
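
With unbound in front, for instance, that might look roughly like this (zone name and domain controller addresses are placeholders):

    # hand everything under the internal AD zone to the domain controllers,
    # resolve everything else recursively as normal
    stub-zone:
        name: "corp.example.com"
        stub-addr: 10.0.0.10
        stub-addr: 10.0.0.11

    server:
        # usually needed so the unsigned internal zone isn't rejected by DNSSEC validation
        domain-insecure: "corp.example.com"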


The last time I saw DNS throughput or performance issues was around 2003 on a network with 200K desktops and servers. That was 17 years ago, and they don't have a problem any more, despite growing in footprint to nearly half a million client machines.

I struggle to understand how DNS can possibly be a performance issue in 2020. In most corporate environments, the "working set" of a typical DNS server will fit in the L3 cache of the CPU, or even the L2 cache.

The amount of network traffic involved is similarly minuscule. If all 200K of your client machines sent 100 requests per second, each 100 bytes in size, all of those to just one server, that adds up to roughly 2 GB/s, or about 16 Gbps, in aggregate.

If your DNS servers are struggling with that, get better servers. Or networks. Or IT engineers.


Fairly certain this feature is more than ten years old.


https://cloud.google.com/docs/chrome-enterprise/policies/?po...

        Google Chrome (Linux, Mac, Windows) since version 80
        Google Chrome OS (Google Chrome OS) since version 80
Chrome 80: February 4, 2020

and as a clarification

When you connect via VPN to the corporate network, the DNS queries are not distributed the same way as when you are in the office. You have X entry points for the VPN, which are served by Y DNS servers, which is fewer than the total number of DNS servers available on the corporate network. Plus the amount of remote users increased vastly, plus the VPN technology used, plus the DNS servers used. Not that simple, I'm afraid.


The article mentions relevant code changes in 2014. It seems like the enterprise policy may be recent, but the feature is much older.


Sure, and I explained why it hit us when it hit us.

Also keep in mind that Edge is Chromium-based now and has the same issue, and is becoming the standard pushed by MS, so the impact is increased now because of this.


Sure, lockdown and increased VPN use makes sense as to why this got painful in march. However I expect GP was quibbling with this part of your statement:

>Furthermore, if I remember correctly, this "feature" in Chrome was enabled in a version which was distributed to our clients maybe a month before the lockdowns, so there was little time to see the effect while clients were still in the office

Which claims that the feature was rolled out recently.


Fair enough, but I wrote "if I remember correctly", which obviously I didn't; I confused when we got hit by it with when it was actually implemented.


The article has a nice graph that shows when the feature was introduced - 2010.


Just thinking for a few seconds, I can think of a number of ways to not repeatedly spam DNS servers while still accomplishing the objective. If you were to send 3 random queries to Google every time you opened your browser, you would quickly get hellbanned behind a recaptcha.

Not saying that we should embark on some quest for retribution against Google. It's just sad.


I bet they just made this as a throwaway and it worked fine when it was 1% of traffic and maybe they just haven't looked at it again. I bet if the APNIC people harassed them they would change it.


Google could, you know, use their own DNS servers for this...


No, they couldn't. The whole purpose of these probe requests is to assess whether the DNS server used by a particular client is acting normally (responding with NXDOMAIN if a domain does not exist), so these must be sent to the DNS server of the client. This effectively means that unless this DNS server performs the hijacking that is to be detected, they will inevitably end up on a root DNS server, because no server in the hierarchy will know those domains.

Forcing these probe requests onto Google's DNS would completely defy their purpose in the first place.


Instead of http://asdoguhwrouyh, they could probe something like http://asdoguhwrouyh.google or anything else in a zone owned by them, so the uncachable traffic would hit only their authoritative name servers and not the root servers.


But then a lying DNS server could easily identify those, and NOT lie about http://*.google -- the reason these requests are entirely random domain names is so they're not easily recognized as probes.


Except that the queries are already totally identifiable as probes in their current form, which is demonstrated in the article.


... only when the delegation for google. is cached.


ah, good point.


It wouldn't usually help to use 8.8.8.8, but they probably could use their own authoritative servers instead of the root servers. Look up <random chars>.dnstest.google.com or <random chars>.dev or something.

The problem with this is, of course, that a malicious resolver could detect this and NXDOMAIN those queries, while passing others through. I don't see what the incentive would be for ISPs to do that, but ISPs are weird.


> that a malicious resolver could detect this

I assume the reason for changing from a 10-char random string to a 7-14 char random string was exactly because some ISPs were detecting it...


Unfortunately the commit message doesn't explain why the change was made:

https://chromium.googlesource.com/experimental/chromium/src/...


@agl?? You here? Do you remember the motivation for this change?


They could just use DNSSEC...


DNSSEC has to be supported by your ISP’s DNS server, which they won’t if they’re trying to intercept your queries.


They're suggesting you install your own local DNS server --- nothing prevents you from doing that, and just talking straight to the roots instead of through your ISP's DNS server.

The real problem is not your ISP, but rather the fact that the most important sites on the Internet have rejected DNSSEC and aren't signed. DNSSEC can't do anything for you with hostnames in zones that haven't been signed by their operators, and, to a first approximation, every zone managed by a serious security team (with a tiny number of exceptions like Cloud Flare, who sells DNSSEC services) has declined to do so.


Do any ISPs intercept upstream requests when running your own recursive resolver? If not, DNSSEC isn’t relevant here and you should be fine just fine with “only” running your own without requiring DNSSEC.


They could. I doubt they do.


I'm sure anyone here who has set up a PiHole ad-blocking DNS server at home has run into these random domain requests and wondered what was going on. At first I thought one of my devices had a virus on it or something until I did a few searches and discovered it was Chrome being ludicrous. (Next topic: Getting Chrome to actually use the DNS provider that you specify and nothing else...)


I recently blocked port 53 in my firewall completely, for that exact reason. I use an internal DNS server that forwards to a DoH upstream server. No more rogue devices trying to use their own DNS, at least until they all switch to DoH too.


I also blocked port 53 in my firewall (except for the Pihole; no DoH there). After that, I noticed that some applications have some DNS servers hard-coded. 8.8.8.8 being pretty prominent.

My solution was to assign the Pihole the IP address 8.8.8.8 as well. Then I added a static route at the router to route 8.8.8.8 to the Pihole. Now every request to dns.google will also be handled by the Pihole instead of getting timeouts.
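
On a Linux-based Pihole host that trick is roughly the following (interface name and LAN addressing are assumptions; the matching static route on the router is vendor-specific):

    # answer for 8.8.8.8 locally, in addition to the Pihole's normal LAN address
    sudo ip addr add 8.8.8.8/32 dev eth0
    # on the router: static route for 8.8.8.8/32 via the Pihole's LAN IP, e.g. 192.168.1.2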


> No more rogue devices trying to use their own dns, at least until they all switch to DOH too

nice that you already debunked your thesis


Would these have occurred on a server that has unbound as its upstream?


Why wouldn't they?

Chrome doesn't care what DNS server software is in use (even if it could figure that out), it cares whether it's behaving properly or not.


It would be interesting to have an estimate of the energy consumed (globally) by this Chrome/Chromium feature...


I'm curious why you think this might be significant. All global root server traffic amounts to < 1gbps. Under contrived conditions you could easily serve it all from a single laptop computer, but even if we assume that realistically it's being served by a large, distributed collection of servers each drawing ~250 W continuously and each housed in one of those ridiculous corporate datacenters with a PUE over 2.0, you're still looking at a global energy cost comparable to one tankful of motor fuel per day, or much less than the energy used by one single commercial airplane.


You can see the code online through the CS browser - https://source.chromium.org/chromium/chromium/src/+/master:c...


Fallout from the ISPs' effort to hijack failed DNS queries.


On macOS you can block these with the excellent product Little Snitch.

I've got several rules for Google Chrome in Little Snitch that seem to do the trick. Deny outgoing UDP connections, and Deny outgoing TCP connections to port 80 for the IP addresses and domain for my ISP. You can see these if you monitor traffic.


It seems like they could rotate these much less frequently to let caches work. It seems that these are random to avoid DNS servers hardcoding a response for them. However they could be pseudo random based on the current day, month or release so that it would be hard enough to intercept them (unless the DNS server was really committed to doing this, but there are other ways to achieve this) while still allowing a lot of caching.

I think the only downside is that you would leak some information about your system clock.


> It seems that these are random to avoid DNS servers hardcoding a response for them. However they could be pseudo random based on [the current date and browser release]

That would still allow ISPs to compute the limited number of domains for which NXDOMAIN would need to be sent at any given point in time.

(Whether they'd do it is another story. The random pattern currently used by Chrome looks like it may still be easily detectable at the DNS-recursor level, so maybe the ISPs really don't bother beyond the simple NXDOMAIN -> portal domain replacement.)


As I said, if they make specific effort they will succeed. The current scheme can be broken by returning a number of different IPs instead of one or two. I think my proposal has a nice balance between making ISPs put in non-trivial effort and not putting a lot of load on the root servers.


This is a classic arms race. The hijackers back off for a while, but as is always the case in low-margin, low-regulation, low-consequence environments, bad actors will find a way to skim a tiny value out of a massive amount of transactions. Give a percentage of that to the network operator, and take the rest home.

The network operators enable this behavior. It would be next to impossible for it to be useful (ROI wise) if they didn't intentionally support it with access to their networks. It doesn't need to be an arms race, but we refuse to regulate or punish anyone in this space. We waste massive amounts of resources detecting and counteracting the hijacking services. The human (developer) cost is where the big waste is here, not electricity.

and the fight goes on....


I'm curious to know how much data the root namespace servers put out in terms of gbps, but this doesn't seem to be public information.


https://root-servers.org/

Select a root server at the bottom. Some, but not all, have a "statistics" link. Seems to be stated in qps and message size distribution, but you should be able to derive traffic volume from that.


Assuming the mean is close to the median, they are reporting ~10B requests daily with a median response size of around 1KB. 10TB daily is a little under 1 Gbps. Traffic is spiky, but this isn't particularly complex once you consider they have multiple data centers/servers. Of course, I may have misread something as daily that was hourly or something like that...


So it looks like the root namespace providers output a totally reasonable amount of traffic. Divided between the hundreds of points of presence globally, this is tens of megabits per physical host.

This FAQ is illuminating: https://www.verisign.com/en_US/domain-names/internet-resolut...

The servers themselves are ordinary 1 RU physical rack mount servers with 1 Gbps or 10 Gbps Ethernet. Nothing special. I'm guessing that most of the load isn't from the root, e.g.: "j.root-servers.net", but from hosting the authoritative DNS servers for .com and .net (b.gtld-servers.net) on the same box. That would surely have more traffic and much more data.


The root servers only serve the root, root-servers.net, and .arpa, and they are starting to move .arpa elsewhere - https://lists.dns-oarc.net/pipermail/dns-operations/2020-Aug...


Reasonable quantity of traffic, but they have to be very reliable.


No need for reliability when there is 26 way failover...


I haven't looked in a good 5 years, but "not much." There were roots running off of 1 Gbps links back then. A quick look at the sibling comment's root stats and I'm swagging a few hundred thousand tps at about 256 bytes per response. The problem is/was mostly in distributing and sinking inbound garbage packets.


You can look at the RSSAC002 data as well. It doesn't count bandwidth, but it counts lots of other stuff.

https://github.com/rssac-caucus/RSSAC002-data

You can also click on the "RSSAC" button on root-servers.org to get the YAML straight from the root server operators themselves.

Most of the root server operators have anycast instances deployed in organizations that host the servers for them. So there's not an easy way to measure bandwidth utilization because many root server anycast instances are hosted in organizations that may not, or could not, report that bandwidth utilization. Look at the map on root-servers.org to see how dispersed around the world these things are.


ungoogled-chromium[1] and Bromite[2] have had a patch to disable this for a while now

[1] https://github.com/Eloston/ungoogled-chromium/blob/14fb2b0/p...

[2] https://github.com/bromite/bromite/blob/410fc50/build/patche...


I can't get past the `size_t i` rather than `int i` in the first loop. Why. I suppose it is some type of defensive programming.


Bit flip changes an int to a large negative value. Now you're stuck doing a signed comparison for a while.


Why does Chrome (Google) need to know whether DNS is being intercepted? What actions does Google take based on the answer?

Note that under this crude test of sending queries for unregistered domains, a user who administers their own DNS could be indistinguishable from "DNS interception" by an ISP or other third party.

I administer my own DNS. I do not use third party DNS. These random queries would just hit my own DNS servers, not the root servers.


From article:

> Users on such networks might be shown the “did you mean” infobar on every single-term search. To work around this, Chromium needs to know if it can trust the network to provide non-intercepted DNS responses.

Don't know if this is the sole reason.


I think you are right.

Reminds me of the story behind "Google Public DNS". Back in 2008/2009, OpenDNS was hijacking "queries" (NXDOMAIN) typed in the address bar to their own search page ("OpenDNS Guide", or some such) on an opendns.com subdomain. In response, Google launched its own open resolver.^1 (OpenDNS was later acquired by Cisco)

1. http://umbrella.cisco.com/blog/opendns-google-dns


In my mind it's a good enough reason to justify trying to fix it.


No, the point is that with a combined address and search bar you don't know whether something is a (local) domain or a search query. You can recognize known TLDs, but that's it.

Guess what Google's priorities were when they approached that problem.


Since these are root queries, wouldn’t your DNS server need to hit the root servers to ensure the TLDs don’t exist? Also your own DNS won’t be detected as DNS interception unless you replace NXDOMAINs with fake responses.


If I am using a localhost cache I serve my own custom copy of root.zone. Currently I am not using a cache; I have split DNS with several authoritative servers and I pre-fetch DNS data in bulk from DOT/DOH servers.

If I serve fake responses does that turn off searching via the address bar?

Why doesn't Chromium just have a setting that allows a user to turn off the incessant queries for nonexistent names?


I think I've understood most of the article, but I missed the initial part. Why is there a probe in Chrome that uses DNS to query random 7-15 character long hostnames, only to get NXDOMAIN and burden the root nameservers? What does this probe achieve?


Some DNS providers (like ISPs) will hijack NXDOMAINs and redirect you to ads or stuff like that. Chrome wants to detect that.


There was a point where, at least in the US, this was standard behaviour for virtually every single major ISP and mobile provider. Several used to hijack all port 53 traffic to disallow you from using anything but their resolver.


And for those who don't understand why this is a bad thing, I will present my own use case. I run pi-hole at home and frequently work from there for another company. That company has provided me with a laptop that uses Cisco's DNS "Umbrella", which is some sort of security feature: https://docs.umbrella.com/deployment-umbrella/docs/point-you... Because my company laptop doesn't pay attention to the DNS servers recommended by DHCP, and ignores the local domain search TLD, if I try to ssh into a machine on my local network (without a FQDN) from the company laptop, it replaces the local search domain with the corporate domain, then does the lookup, and gets an A record from Umbrella that is not on my local network. It makes the ssh connection and (surprisingly) reaches an ssh server, which asks me for my password. The login fails, and my password (in plain text) could very well have been harvested by the ssh server on the catchall host. Now you are going to tell me that I shouldn't use ssh passwords, and should instead be using RSA keys for ssh. Regardless of what the NSA tells you, THIS IS ALWAYS A BAD IDEA because once any account is compromised, ALL OTHER ACCOUNTS with locally stored keys ARE ALSO COMPROMISED.

Sorry for the rant, but wildcard catchall DNS is a REALLY BAD THING.


> THIS IS ALWAYS A BAD IDEA because once any account is compromised, ALL OTHER ACCOUNTS with locally stored keys ARE ALSO COMPROMISED.

This is not universally true. If you generate separate private keys for each server-client pair, compromising one private key will limit the damage to just the one server.


That is just not true. It may be the case if the key itself is compromised, but consider that you may have many different accounts scattered on different servers. Once one of them is compromised, the attacker now has access to every other account because they are all chained together.


Can you describe the attack scenario you're imagining in a bit more detail? Because that doesn't sound possible to me.


Yeah, the argument you are making about all keys being compromised doesn't make sense. You are leaving out a key assumption in your setup, and without it is not possible (for us) to accept the chained compromise you are describing.


User managed passwords aren’t ideal. If you’re looking for more security and you’re concerned about compromise of local keys, you could purchase a couple of yubikeys (or similar), or you could use an SSH CA (Hashicorp vault and Step come to mind). However, if you’re very concerned about storing creds on a company laptop, or compromising your passwords by logging into a honeypot server (which known_hosts should be protecting you from), you ought to be much more scared of your company keylogging you...


> should instead be using RSA keys for ssh.

No, you should be using ed25519 keys

> THIS IS ALWAYS A BAD IDEA because once any account is compromised, ALL OTHER ACCOUNTS with locally stored keys ARE ALSO COMPROMISED.

Not if you use passphrases on the key, generally together with an ssh agent, which is the best practice
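
E.g., generating a passphrase-protected key and unlocking it once per session (file name and comment are just examples):

    # new ed25519 key; -a bumps the KDF rounds used to protect the passphrase
    ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519 -C "laptop-2020"
    # load it into the agent once; later logins won't prompt for the passphrase
    ssh-add ~/.ssh/id_ed25519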


In that case you should at least get a differing host key warning (or a prompt that the host key is not yet known).


Heh Umbrella is such a shitty app, if you've got root you can

cd /opt/cisco/anyconnect/bin

sudo sh umbrella_uninstall.sh

That should leave VPN functionality intact, but remove the ridiculous MITM of DNS queries.


Actually I would tell you to use ed25519 keys instead of passwords lol


My personal thought is that you shouldn't be connecting to anything personal or local from a work provided, likely heavily keylogged, device.


Yea. They want to detect those so they can send you to their search page with their ads! How generous.


Why is the C++ code labelled as coming from some .c file?


Why on earth is there someone with shell access to the DNS root zone and running tcpdump?


how would they maintain the root servers and correct issues without shell access or tcpdump? Make blind guesses and restart the server until the problem goes away (it won't)?

No matter how high-profile the environment, eventually, the rubber will hit the road and some human will be in a privileged position to be able to fix a problem.

That is true for every single service out there. Yes. Including Gmail. Including AWS. Including Twitter. Everywhere.

Depending on size and profile of the service it's more or less people in need of jumping through more or less hoops to get there, but this must be true for any service.

Always keep this in mind when you make the decision to move your data to a cloud service.


Why is a server with a problem still part of the root zone? And no, this is absolutely not the case for serious operators. Access to production systems is highly regulated.


Yes, highly regulated access with lots of hoop jumping, that's what they said. And there exists a person who has jumped through all the hoops and has that access. And that hoop jumping person ran tcpdump on the root server.


I don't want to make this a personal attack, but it really sounds like you haven't done much work in a real production environment in a high-sec company. There may be a lot of red tape and safeguards in place, but you will always have someone with access to do anything, anywhere. It's the only way to respond to "interesting" incidents.


OK, so say you remove it, and the problem goes away. Now what do you do? How do you find out what was actually going on?


How do you remove it?


This study used the DITL “day in the life” DNS data collection exercise https://www.dns-oarc.net/oarc/data/ditl which is a formally organized, regular research activity.


The worst thing is, this will not even detect a well written NXDOMAIN interceptor that only hijacks requests to valid top level domains.

It's about time for DNSSEC to be available on all TLDs and for browsers to nag if it is broken.


As I wrote above: DNSSEC can't do anything about unsigned zones, and the overwhelming majority of zones in both the North American Internet and in popular domain lists like the Moz 500 are unsigned, and will remain unsigned, despite almost a decade of pleading from DNSSEC advocates to recant.

What's crazy about this is that there's a trivial solution to forged NXDOMAIN responses that people can adopt immediately: just DoH to a provider that doesn't forge NXDOMAIN responses (none of the major providers do).

I sometimes wonder whether the vehemence of the anti-DoH advocacy is rooted in concern that it will cause DNSSEC to lose yet another potential motivating use case.


I've never looked at DOH as an attack on DNSSEC, though I suppose you could. I think the resistance is more about the big corporate and Internet-level DNS operators like Google's 8.8.8.8: they want to be able to manipulate DNS responses when necessary. I know, evil corporate IT Ops hijacking my HN connection. No, not that.

Think about a coordinated effort by top tier DNS providers globally to stop a giant bot network by simultaneously 'hijacking' DNS responses for the command and control server host-names. In classic DNS this is easy, just intercept the requests at the LDNS provider and return a dummy server IP, all good.

That falls apart with DOH and DNSSEC. With DNSSEC you cannot forge a response to a client that strictly expects signed responses for a particular zone. And with DOH, the various corporate IT shops cannot inspect and 'hijack' the responses. Though, the DOH operator can still change the response. But that moves the capability outside of local corporate IT and into a multinational company that might not agree with your request to 'fix' a problem via assisted DNS hijacking.

So all of these new, safer DNS delivery methods do legitimately impact the ability of "good"* operators to protect the Internet. Is the trade-off worth it to protect users' DNS traffic versus being able to respond to threats? I think that protecting users' daily traffic is net-net better, as it is a steady-state problem and state-sponsored actors have the resources to subvert a population via DNS. But I also feel the loss of a tool to protect users at the same time. Things like this are never zero-sum.

Disclaimer: I work for Microsoft and although I don't operate DNS services as part of my job, I have spent a lot of time on this particular topic over the years. These are my opinions, not the company's. I welcome challenges to my opinions, that's how I learn.

*"good" is always a situational thing.


Losing the ability to do this very specific mitigation seems a tiny price to pay for not having everybody's DNS requests have zilch for transit privacy and integrity all the time.


DNSSEC allows a recursive DNS server to absorb these Google Chrome junk queries: the resolver can use a secure proof of nonexistence to answer the junk query from cache. Much more efficient, and works to absorb junk traffic in any domain signed with NSEC, not just the root. https://tools.ietf.org/html/rfc8198 https://www.potaroo.net/ispcol/2019-04/root.html


> hijacks requests to valid top level domains

I believe the purpose of this feature is not about detecting hijacks of requests to valid top-level domains. In other words, a well-written NXDOMAIN interceptor would not cause any harm to its intended audience, so they didn't bother trying to detect it.

It's about detecting whether an "eng-wiki A aa.bb.cc.dd" record it just received from the user's DNS server is actually eng-wiki served from the corp network instead of a stupid ISP page.


This comment, and another one mentioning DNSSEC, have been downvoted.

Please explain why you hate DNSSEC instead of downvoting things you disagree with.


Browser vendors seem to have shelved all work on DNSSEC for reasons they haven't publicly stated. It had such promise, being able to reduce trust in CAs by pinning HTTPS certificates to DNS responses, so it was exactly what browsers would have wanted, yet all work still stopped around 2015 or so.

To me, it's as if DNSSEC has some critical and unfixable security vulnerability, and people who make these decisions decided to stop all work on it, but not reveal the vulnerability because doing so would do too much damage.

This is probably the most comprehensive list of reasons not to use it: https://www.imperialviolet.org/2015/01/17/notdane.html


This is not what DNSSEC does. DNSSEC is about signing DNS records to prevent spoofing of records.

DNSSEC has not seen widespread adoption because of complexity in implementing and maintaining DNSSEC and concern over weaknesses in the cryptography chosen. The protocol has been around a long time now and the algorithms involved are not modern. Some DNS providers are making DNSSEC easier by handling key signing and rotation with no user involvement. Just check the box that you want it activated.

DANE is the storage and retrieval of certificates via DNS. DANE depends upon DNSSEC to sign the certificate records to determine that they haven't been spoofed.

There is no great conspiracy theory needed regarding why DANE has not been implemented in browsers. The browser makers have been pretty open about why they have chosen not to support it. For example code has been written for Chrome but Google has said they haven't shipped it because they don't want to support the 1024-bit RSA required as part of the DNSSEC standard.


I don't understand why browser vendors should be involved

My browser should ask my OS to resolve DNS

It's my OS's responsibility to do that - maybe sending a request to a remote server, maybe running its own resolver, maybe using DoT, DoH, DNSSec or not

What business should it be of browser vendors?


This weird idea of the delineation of responsibilities between the OS and applications has never actually been how things worked. For a very long time, the OS resolver libraries couldn't even reliably do asynchronous resolution, and applications like your browser provided their own. I think the notion that the OS "owns" DNS comes from the fact that people used to get their DNS servers from DHCP, and the OS's libraries automatically knew what servers DHCP had handed out. It's not like some kind of law of system design.


The OS resolves DNS names to IP addresses... Except an IP address isn't a security identifier of any kind, so there is no benefit to it not being spoofed.

The relation to browser vendors is that DNSSEC allows DNS to verify/validate certificates for TLS connections, which can be used by web browsers (and other applications, but web browsers would be the main users).


Shouldn't it be up to my OS to do that validation though, not the browser? After all, when I ssh to my.server.com I want the same guarantee as when I https to it.


That’s not how SSH or HTTP work.

If you ssh to a server your OS will resolve the IP, but your SSH client will request and attempt to verify the server key. Same with browsers and HTTP.


No, browsers won't do anything of the sort with HTTP. Or ftp. Or SMTP.

HTTPS, sure. SMTP when it starts a TLS session, sure; FTPS too. There are protections at a higher level to ensure a MITM isn't working, and you could probably argue that's a reason DNSSEC isn't required at all: get the wrong IP and your secure application won't do anything past the initial handshake. That still leaks information, though.

DNSSEC, if used, is something that should sit in my DNS resolver, which should be part of my OS, not in my browser or SSH client. After all, I might not want to use DNS at all; I might want to use a different protocol for name resolution.


In an alternate universe, an application would call connect('google.com', secure=true), and the OS would make an encrypted connection and verify that the person on the other end of the connection really was google.com using DNSSEC-published keys.

While that might have been a better design, the reality is that OSes only provide APIs for unencrypted connections, and each application builds its own encryption and authentication on top of that.
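
Roughly what that looks like in practice, as a small Python sketch: the OS hands the application nothing more than a plain TCP socket, and the TLS handshake, certificate validation and hostname check all happen in a userspace library chosen by the application.

    import socket
    import ssl

    ctx = ssl.create_default_context()  # loads the system's CA roots

    # The OS API only gives us an unencrypted TCP connection...
    with socket.create_connection(("google.com", 443)) as raw_sock:
        # ...and the application layers encryption and authentication on top,
        # including checking that the certificate really is for google.com.
        with ctx.wrap_socket(raw_sock, server_hostname="google.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])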


> In an alternate universe, an application would call connect('google.com', secure=true), and the OS would make an encrypted connection

IBM's mainframe operating system z/OS (formerly known as MVS) has a feature called AT-TLS (Application Transparent TLS).

With AT-TLS, you can configure operating system policies to intercept TCP socket calls from an application, and automatically add TLS to the sockets. That way, some legacy app, which knows nothing about TLS, can have TLS support added to it, without any modifications required.

There is an IOCTL that can be called on the sockets to find out whether AT-TLS is enabled, what certificate is being used, and so on, so applications can easily be enhanced to detect AT-TLS on a connection and respond differently.

https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.4.0/...


In reality my browser says "connect to www.google.com on port 80", with the library eventually calling something like getaddrinfo to translate the domain name to an IP address.

My OS resolves www.google.com to 123.45.67.8. It's the OS's responsibility to resolve the DNS name, not the browser's.
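
That split is easy to see from any language: the call into the OS resolver returns addresses and nothing else, with no notion of identity attached. A small sketch:

    import socket

    # The OS resolver's job ends here: name in, addresses out.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "www.google.com", 80, type=socket.SOCK_STREAM):
        print(family.name, sockaddr[0])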


That blog post is five years old, and most of the things it lists are now moot.


True, but DNSSEC is still going nowhere...

The future seems to be HTTPS with domain-validated certificates issued over insecure DNS, or even with DNSSEC but with the HTTP challenge done over an insecure network.

Great for state actors to inject malware into any site...


I don't get this feature. And I really hate that it's present in pretty much every browser these days. If I want to type a URL, I'll use the address bar. If I want to search, I'll use the search bar. Different bars with different keyboard shortcuts and different purposes. Why do so many browsers merge these two? Screens are insanely wide these days, so screen real estate can't be the reason. Are we trying to trick users into thinking that URLs aren't a thing anymore?

Maybe this "omnibox" doesn't know whether I want to enter a hostname or a search term, but I do.


Firefox's omnibox is probably my favourite feature; it's so much better than separate bars. The reason is that it also searches within your bookmarks and history for items matching the terms you enter. At least half the time when I use the location bar, I am looking for a site I have been to before or bookmarked, and it will pop up as the first result in the suggestions. If nothing is found, it makes sense to fall back to a web search a lot of the time.

How would this work with separate bars? Surely the history should be searched when typing in the location bar, but then you couldn't fall back to searching the web. Also, when looking for that site in my history, I might be typing part of the URL or part of the page title, so it is ambiguous whether it's a URL or a search. Even if I am typing part of the URL and nothing is found in my history, it might make sense to continue with the same terms as a web search instead of me trying to remember or form a whole URL from memory.

Chrome's implementation is terrible as it's designed to just funnel you into Google search, and doesn't give you a good idea of how useful it can be.


I agree that that's the least painful implementation. But last time I tried FF mobile, the behaviour had changed and it was giving me search suggestions (as in actively contacting a search engine as I was typing), with no way to turn it off. That's a deal breaker for me.

If I want to search in just my history, I can simply press CTRL-H and type to search in history.


All the technically deprived people I have seen use none of the above: the first thing they do is open Google (most now have it as the default tab page) and type the URLs given to them into the search box there. No one knows about the address/search bar, and those who do expect it to work like the Google search box.

Update: I hide a site of mine from Google (I don't care about search engine traffic and block all crawlers), and I have noticed that when I give people I know a URL from that site, they often tell me it cannot be found, because the Google search they use all the time does not list it.


So many people do this. They'll open the browser and start typing google.com in the omnibar. Even though it says "Search or enter address" right there.

If the less tech savvy can't figure out that it does both, then maybe it isn't as intuitive as the browser builders think.


I've seen other weird behavior, even from people with a fair bit of computer experience. Techie instinct is not the right basis for making UI decisions.


Isn't that just a matter of educating users? Users, contrary to popular belief, are not stupid. Everyone starts off knowing nothing at all, but we can teach them what an address is and we can teach them how to search for things.


> Users, contrary to popular belief, are not stupid.

"I don't care about learning, I just want it to work."

"Why do I care again?"

"Just make it do it right."

"Can you come fix the CPU again?" (Speaking about PC, with the difference explained multiple times previously.)

"Can you come fix the computer again?" (Same person speaking about monitor. Difference also explained previously.)

These are all actual quotes from friends, family, and co-workers.


Ignorance isn't stupidity. But it may be classified as stupid to purposefully remain ignorant so idk.

But I do see your point.


> Ignorance isn't stupidity. But it may be classified as stupid to purposefully remain ignorant

Exactly what I was trying to say. I have no issues if someone is ignorant about something, and is actually trying to learn, even unsuccessfully.

But if someone won't even try, they get no pity from me.


It most certainly is. The current “startup culture” (or should we say “swindle culture”?) values the illiteracy of the masses, because everyone's dream is, in simple terms, to screw millions (or billions) of fools over and make a lot of money from it. No wonder the path of adapting to ignorance, of downgrading to a lower level, is chosen instead of the path of education and lifting people higher.

A user who doesn't know what the address bar is and types everything into a search engine benefits Google. Therefore, you won't see any changes in Chrome.

Obviously, it's a more general topic than bashing the usual IT evils. People take reading and writing for granted, just like they take electricity and a water supply for granted, but it doesn't just magically happen. There is an enormous amount of continuing work behind it. Remember the '80s talk about teaching kids to use and program computers because it was ESSENTIAL FOR THE FUTURE? What has happened? Computers haven't got simpler at all. You still need to teach how to use them, but today it's not a fashionable topic, and everyone pretends it's not their problem. The result of a disparate, self-maintained education is — who would've thought — uneducated people. In addition, the “educated” “specialists” treat users as if they are on a tropical plantation in a cork hat: “Those damn brutes can't learn to do anything properly! Can't argue with nature, stick to whips and simple tasks.”

It is important to remember that the radiance of the modern IT sphere has little to do with Jobs' iPhone presentation and whatnot. Without the old simple-hearted initiatives, long-forgotten BASIC listings in hobbyist journals, government programs on educational computers, and local electronics clubs, a lot of people would not be working there. Everyone was stupid once; there is no exception to that. The focus should be on the process of learning, not the state of being stupid.


I think the ratio of people who know stuff stays roughly the same.


On FF it’s Ctrl+k for searching (by prepending `? `) and Ctrl+l for addresses (if it can be parsed as an address). I almost never just click into the bar, so that works fine for me.


This works in Chrome too


I just tried this, and it doesn't do what I'd hoped.

Ctrl+L does not force Chrome to resolve the term through DNS; it may still run a Google search. If you want http://mylocalserver you have to include the http://, or Chrome will Google for mylocalserver.


> Screens are insanely wide these days

This is not relevant for URL or search bars, since they need to be displayed horizontally. Separate bars mean less vertical screen space, which is still scarce.


My search bar is next to my URL bar; both are big enough, and it has no impact on vertical space. I don't think I've seen any browsers where the two bars are stacked vertically.


See this: https://commons.wikimedia.org/wiki/File:Firefox_1.0.png

In Firefox preferences there is still an option to switch to the old layout with two bars. Sadly, it does exactly the same thing as adding the search bar manually: it gives you a second, redundant search bar and does not turn off search support in the original URL bar.


Oh yes, I remember that. I don't miss it at all, save for the annoying behavior of the single bar when a search contains a dot (e.g. searching for a dotnet namespace).

While the single bar is my preference, I agree that more choice should be given to the end user. Google is trying too hard to shovel in the changes that benefit them rather than addressing users' needs (an obvious example being AMP).


Insanely wide means that you can have two inputs on the same row without vertical impact. That's how it used to be.

(I think the omnibox is the right UI though)


> This is not relevant for URL or search bars since they need to be displayed horizontally

Need? Has anyone tried?


I've seen it, back around 1999, possibly in Konqueror? Something that let you drag toolbars around, and if you moved the address bar to the left or right side it would change the direction of the writing.

Let's say that testing it briefly was enough. Editing tilted text works up to around 45 degrees; steeper than that is a strain.


Good luck reading anything in Latin script with a one-character-wide vertical search bar. This would work for Chinese (that's the traditional writing orientation) but definitely not for most other languages.



