Considering a subdomain "trivial" is ridiculous... there's a difference between "www.example.com" and "example.com". Not only can they serve different sites, they can even have different DNS records!
It seems that "m." is also considered a trivial subdomain. So when a user clicks a link to a "m.facebook.com" uri, they'll be confused why FB looks different when the browser reports it's on "facebook.com".
This is certainly subverting the domain name system. I can't see the value, or any gain in security, in this.
(If you want to put focus on the domain, then display the host part with less contrast, i.e., grey, but don't hide any potentially vital information. Otherwise, put out an RFC defining "www" as a substitute for "*", or a zero-value atom, in order to guarantee consistent behavior.)
Edit: There are also legal concerns with catch-all domains in some countries. Blending the lines certainly doesn't help.
The domain name system has been around for decades and it's a clever and proven system. It can – and should – be taught in school, and, arguably, knowledge of it, while not difficult to obtain, is essential in our times. Additional ambiguity here is probably not what we want.
Arguably, the most sincere problems arise from mixed alphabets with Unicode domains and look-alike characters/glyphs. This could be addressed by a) going back to codepages (Unicode subranges) and defining a valid subset for each range, and b) enforcing a domain name (hostname and domain) to be in a single codepage. Clients should derive the codepage by the Unicode range and generate a codepage-identifier, which may be displayed as a badge, identifying the respective range. And, of course, any mixed domains should be regarded illegal and invalid. (We may even want to make this codepage-identifier a mandatory particle of any URI, preceding the hostname.)
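A minimal sketch of that single-script rule in Python (it approximates scripts from Unicode character names; a real implementation would use the Unicode Script property, and the `COMMON` allow-list here is my own assumption):

```python
import unicodedata

# Characters allowed in any label regardless of script (assumption).
COMMON = set("0123456789-.")

def scripts_of(hostname: str) -> set[str]:
    """Crudely derive the script of each character from its Unicode name.
    'CYRILLIC SMALL LETTER IE' -> 'CYRILLIC', 'LATIN SMALL LETTER A' -> 'LATIN'."""
    scripts = set()
    for ch in hostname:
        if ch in COMMON:
            continue
        scripts.add(unicodedata.name(ch).split()[0])
    return scripts

def is_single_script(hostname: str) -> bool:
    """Reject hostnames that mix scripts, per the rule proposed above."""
    return len(scripts_of(hostname)) <= 1

print(is_single_script("example.com"))   # True  (all Latin)
print(is_single_script("еxample.com"))   # False (Cyrillic "е" plus Latin rest)
```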
> Arguably, the most sincere problems arise from mixed alphabets with Unicode domains and look-alike characters/glyphs.
No way. The most sincere problem is that hostnames do not enforce any binding to a real-world identity that users can understand (nobody inspects certs) and that the most trustworthy component of a hostname is the second-to-last section (right before ".com"). Humans tend to look at the front of the URL, making "www.bank.evil.com" a mind-bogglingly effective phishing technique.
Homoglyphs are almost always a sign of bad behavior and can just be banned to a large degree. The fact that "foo.com" or "foo.evil.com" are not necessarily owned by company foo is much worse.
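To make the "trust the part right before .com" point concrete, here's a toy sketch (it naively takes the last two labels; real code must consult the Public Suffix List, e.g. via the third-party tldextract package, to handle suffixes like "co.uk"):

```python
def registrable(host: str) -> str:
    """Naive sketch: treat the last two labels as the registrable domain."""
    return ".".join(host.split(".")[-2:])

print(registrable("www.bank.evil.com"))  # evil.com -- not your bank
print(registrable("www.bank.com"))       # bank.com
```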
Regarding the parsing of URLs, this is a common, but mostly counterintuitive argument: Take for example names in most western countries, addresses (street-zip-city-country), etc. Most of our most important identifiers work this way.
Regarding the lacking binding of identity: on the other hand, this has been one of the most important features of the web from the very beginning. Also, there is no way to set up a system which will attribute a name to a single person in a readable and intuitive way. (E.g., names fail to do so.) Arguably, this should be left to (optional) extensions.
I'd argue the knowledge required to parse a URI safely may be conveyed in a couple of minutes. Why not enforce this knowledge? Why not have a URL-parsing note on the start screen of any browser? Why dumb down the system and introduce ambiguity – and by this even more insecurity – instead of educating users? URL parsing is a vital skill, which can be acquired in less time than memorizing a basic table of partial addition results. Why do we still try to teach addition, if we can't teach URLs?
You're in good company. Tim Berners-Lee said something similar when reflecting on what he would do differently if given the chance:
> Looking back on 15 years or so of development of the Web is there anything you would do differently given the chance?
> "I would have skipped on the double slash - there’s no need for it. Also I would have put the domain name in the reverse order - in order of size so, for example, the BCS address would read: http:/uk.org.bcs/members. The last two terms of this example could both be servers if necessary."
Yes. Think how you have to read "mobile.tasty.noodles.com/italian/fettuccine" to determine where it goes.
First start at the slash and work your way left: "com -> noodles -> tasty -> mobile" - then jump back to the slash and work your way right: "italian -> fettuccine".
This is counterintuitive and I doubt most users understand it. "com.noodles.tasty.mobile/italian/fettuccine" makes more sense to me.
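A toy sketch of that reordering (the function name is mine):

```python
def big_endian(address: str) -> str:
    """Rewrite host/path so the host labels read largest-to-smallest."""
    host, slash, path = address.partition("/")
    return ".".join(reversed(host.split("."))) + slash + path

print(big_endian("mobile.tasty.noodles.com/italian/fettuccine"))
# -> com.noodles.tasty.mobile/italian/fettuccine
```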
Also, I think TLDs like "com" and "edu" and now "io" and "cool", etc, are misguided. I wish we had "country.language" as the only TLDs. For instance, "us.en.apple.www/mac". I see several advantages.
One, if "us.en.apple" and "uk.en.apple" were different entities, it would make legal sense, whereas "apple.com" and "apple.cool" being different entities makes no sense. Two, a use would likely notice if they ventured outside their usual TLD(s), and be less surprised by the different entity. Three, these TLDs could have rules about allowed characters; eg, only ASCII in "us.en". This would make homoglpyh attacks much more difficult.
I'm not so convinced. There would be still "uk.co.bbc" and "com.bbc" pointing to the same body behind it and any kind of confusion arising from this, like, "is 'ug.co.bbc' the same?" The most important part is teaching users that the identity isn't just "bbc" with some extra decoration. Also, we have the reverse example in software packaging (com/example/disruptiveLibrary) and it isn't fool-proof either (especially, if you only know of "disruptiveLibrary" and not about its origin).
I don't see how you could enforce a hostname binding to some real-world identity. Hostnames really need to have a non-ambiguous mapping from a name to a computer (more or less), but real-world entities don't have that without using really cumbersome identifiers. Many natural persons share a name, so how do we decide who gets a hostname based on that? The same is true of corporate persons – there are many that share names. Even if there were a way to disambiguate these things, it seems unlikely that the entity in charge of it would also want to run a public registry – so how do you make that work?
Absolutely. People don't have a single coherent model of identity in the real world. It's hard to glue certs to a pile of sand. Did I buy lunch from "Tim Horton's", from "The TDL Group Corp." or from some random company with an address for a name? The answer is yes to all three, despite only buying one lunch.
On a higher level, all those considerations are about a single question: Is the Web about communication (then it's probably OK as it is), or about a viable business platform with an entry-level as low as it can be?
Whoever is without interest may throw the first stone, er, browser extension.
Both techniques are deceptively effective; I know the following might be only anecdotally relevant, but it's the most recent case of a successful phishing attack I know of:
Recently a friend of mine didn't see the lower dot on the 'e' in a URL [0], and promptly ended up inadvertently broadcasting messages to everyone on her WhatsApp contacts list.
How about displaying an identicon, rendered from the domain, in the address bar? People might soon learn what the icons of their important sites look like and will easily detect if somebody is trying to phish their bank account.
The space of easily visually distinguishable images has a certain size. Let's assume there's a deterministic, pseudorandom mapping from domains to images. For a given domain, how many plausible impostor domains are there? What's the chance that there's at least one impostor domain that happens to get the same image?
If you have 1000 distinct images, but a given domain has 5 letters that could each be replaced with any of 3 visually identical Unicode characters, then, well, the chances are very high that there exists a plausible impostor domain with the same image. I don't think this is a very workable approach.
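Rough numbers for exactly that scenario, assuming images are assigned uniformly at random and each letter can either stay or be swapped for one of 3 lookalikes:

```python
n_images = 1000
impostors = 4 ** 5 - 1          # keep-or-swap per letter, minus the genuine domain
p_collision = 1 - (1 - 1 / n_images) ** impostors
print(impostors, round(p_collision, 3))   # 1023 -> ~0.641
```

So even under these generous assumptions, a given domain has roughly a two-in-three chance that at least one plausible impostor shares its image.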
Any fake-Facebook website can copy Facebook's favicon, so that wouldn't add any security at all.
An identicon is a hash value represented as an icon. "facebook.com", for instance, may hash to a red image with a yellow line through it. While you wouldn't remember the icon initially, over time you would – or at least your subconscious would. If you ever visited a fake Facebook, you'd immediately notice that something was wrong if the icon suddenly was green with a blue dot in it, for instance.
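A minimal sketch of the mechanism (toy code: a real identicon renders a symmetric pixel pattern, but deriving colours from a digest shows the idea; the function name is mine):

```python
import hashlib

def identicon_colors(domain: str) -> tuple[str, str]:
    """Deterministically map a domain to a (background, foreground) colour pair."""
    digest = hashlib.sha256(domain.encode("ascii")).digest()
    bg = "#{:02x}{:02x}{:02x}".format(*digest[0:3])
    fg = "#{:02x}{:02x}{:02x}".format(*digest[3:6])
    return bg, fg

print(identicon_colors("facebook.com"))
print(identicon_colors("faceb00k.com"))  # a lookalike hashes to unrelated colours
```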
Not sure if serious, but no. Anyone can copy a favicon; the point of an identicon is that it's generated from the domain name, so subverting it would require an attacker to find a hash collision with a visually similar domain.
Sorry, I mistook "that is rendered from the domain" for "rendered from a resource from the domain".
However, teach users to read domain names! If users do not grasp the general concept, e.g., if the supposed identity is just "example" (possibly with some decoration considered insignificant) and not "example.com", how are they supposed to survive? Domains have been around for more than a quarter of a century, the Internet is actually part of our lives… There is no excuse, and there is no sense in pretending that there was no harm in not understanding the basics. That said, there are real ambiguities that have to be addressed.
Which would be called computer literacy. I find it both awesome and terrifying that people can successfully do jobs that require working on a computer and still be computer illiterates. Awesome in a sense that it illustrates how good computer interfaces actually are and terrifying in a sense that in any profession with heavy machinery a person with a solution "adjust switches and dials until something happens" would be told to immediately vacate the place for safety reasons.
We do teach kids not to talk to strangers in the street and consider it quite important, I think. What's so much different about teaching them how not to get robbed on-line? It's not about being a good consumer, but about minding your own feet.
Yeah, because moving from a text domain with no collisions possible to some sort of collision-prone visual system, to ensure users are able to understand the domain they are viewing, seems like a great idea.
FFS, if users can't see that somedomainname.com is different from somedomanname.com, how does a randomized image of the domain name based on a hash solve this?
How is the identicon designed such that it's difficult to spoof?
I know of at least one site which uses a user-selected image on the login screen to thwart phishing attempts. Because it's user-selected, it's memorable – more so than a password, I think. It would also be hard for a scammer to spoof, because they don't know which image the user selected when they created the account.
Unfortunately, this would probably be less notable, and thus less memorable, if everyone did it.
>enforcing a domain name (hostname and domain) to be in a single codepage
This is essentially what is already implemented in most browsers. You can't mix characters from different scripts in a domain name, except for special cases (e.g., Japanese and Latin are frequently used together and have little potential for confusion).
The reason why Google is doing this is because they are slowly trying to do away with URLs, as direct traffic is probably their greatest untapped segment.
Google is trying to get users to go through their doorway pages, which is exactly the kind of thing for which they penalize publishers.
Pay attention to when you enter direct addresses, let's say from a device/media subscription authorization page. The autosuggestion feature will often recommend Google searches, disguised as URLs, instead of helping you complete the very obvious URL.
If they help you get to the site directly, the opportunity to acquire your page views diminishes.
These behaviors are hostile toward users. I'd like to see further into their playbook to deprecate the URL as we know it.
Notably, what is the common answer to a system regarded as too complex to be handled on a general level, so that it may be considered a common risk? Authority (read: a trusted man in the middle).
They might be changing how they want to display them, but "do away with" is unsupported by the article:
> But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we’re figuring out the right way to convey identity.
"The focus right now, they say, is on identifying all the ways people use URLs to try to find an alternative that will enhance security and identity integrity on the web while also adding convenience for everyday tasks like sharing links on mobile devices."
My statement is clearly supported by the article. They paint a rosy picture of it, because this is a submarine piece, but they are definitely making moves against the url.
> My statement is clearly supported by the article.
You're ignoring a direct quote in favor of a Wired reporter paraphrase (one which mentions sharing links, no less). They cite an earlier effort, which was a display change. This issue is for a display change. None of this points to "trying to do away with URLs".
"I don’t know what this will look like, because it’s an active discussion in the team right now," says Parisa Tabriz, director of engineering at Chrome. "But I do know that whatever we propose is going to be controversial. That’s one of the challenges with a really old and open and sprawling platform. Change will be controversial whatever form it takes. But it’s important we do something, because everyone is unsatisfied by URLs. They kind of suck."
She says it's important that they do something! GTFOH! Hands off our Internet!
The problem here is that they view Chrome as their platform. They have too much market share, à la IE6. Instead of following and helping to shape standards, they are considering hijacking the platform. Argh!!!!!
+1 Why do they need to change anything? Of course it's going to be « controversial »!! What happened to RFCs?
Hiding the URL scheme was the first step down this path of utter stupidity, and I vividly remember the hostility and hubris of the Chrome team at the time.
We still have Firefox, but many times they just blindly follow suit.
It is worse than that for some users. I've seen actual users who type/paste real URLs into Google's search box in order to go to a site. They actually had no idea that the bar at the top of the browser that said "google" (since they/someone had set their default homepage to Google) was a place where they could delete "google.com" and type/paste the URL they wanted to visit instead, to actually get to the site they wanted.
You seem shocked at this, with word usage like "actual users", "real URLs" and "actually had no idea".
But how are we to expect users to know any better until general technology literacy improves?
Many people can't tell you the difference between a modem, router, OS, browser, or website.
I remember years ago sitting down with my elderly grandmother trying to show her how to use a desktop...
We are too close to our work so everything is familiar and easy.
Even the concept of moving the mouse on a table to represent moving the mouse cursor on the screen is something we take for granted.
Tell someone who's never used a mouse before to double click something to open it. You have to start way back earlier at the concept of which physical button on the mouse to use.
This turned more into a general rant about how we overestimate regular users, but it's been on my mind for a while.
> "Many people can't tell you the difference between a modem, router, OS, browser, or website."
They don't care, nor should they. How many people know how many spark plugs are in their car?
You're correct. We, the more tech-literate, take too much for granted; and most experiences and learning curves are too far over the head of the "average" user.
It's not about knowing how many spark plugs are in their car. It's more about buying a car that comes with a custom power adapter plugged into the cigarette lighter, never realizing that you can plug your own accessories into the cigarette lighter instead of buying your phone charger or GPS from the car company, and then not caring when they just take away the cigarette lighter and replace it with their own custom port.
Since we know all analogies break down under close inspection, I'm pushing the idea that the best analogy is actually a brief description of the event / idea itself.
So in this case:
Not displaying www. in the address bar is actually a whole lot like not displaying www. in the address bar.
And if anyone doesn't understand why that is a bad idea, maybe we should explain it to them, which might require using admittedly imperfect analogies that they can nonetheless understand.
The benefit is obvious in that instance. There is a very direct connection between checking your mirrors and not hitting a car as you merge or similar.
Where is the cause and effect for a URL or SSL cert? There is no learning experience.
Furthermore, as some have claimed and I've personally witnessed, for some users URLs literally don't exist. Just type whatever site you want into the Google box and hope you get lucky.
I think the spark plugs example is an excellent one. People used to require an extensive knowledge of how cars worked in order to have a prayer of using them effectively. Now they don't, because we realized none of that knowledge is necessary if you design the system correctly.
We have enough historical context to realize that things like parsing URLs by eye is unsafe for the general population, and always will be. The solution is to engineer that need out of existence.
You might want to consider that manufacturers have added blind spot detectors to cars as people are bad at changing lanes safely, even with all the training in the world.
When did you have to know how spark plugs work to drive a car? And isn't this why car mechanics exist? On the other hand, you had to learn at some point what an RPM gauge is... and we still have it in cars, even though you could say you don't really need it.
My brand new one has a lot of gauges... so I'd say my point is still valid. And I find them extremely useful, because you can make better use of fuel if you know what they mean.
Do you seriously not know how many spark plugs are in your car? It's the same as the number of cylinders. How could you not know that?
They absolutely should care. They should be aware that when they store things in "the cloud" they are not stored on their device and are visible to third parties. They should understand what encryption is and how to use it. "I don't know what I'm doing, and I didn't get the result I wanted, but it's not my fault it's the machine" is not an acceptable statement, whether we're talking about cars or computers.
I guess, the simile isn't entirely on the same level. You may not know how many cylinders there are in your car, like you may not know the number of cores in the CPU of your computer. They are both essentially hidden.
But you do know how many pedals there are in the car and, probably, how many switches there are for the lights, and that the wiper has different speed settings, etc. You even manage to control these few elements, because they are the user-facing elements you're dealing with, the interface. There's no need to unify the pedals into a single one and have the car decide whether it means accelerate, brake, or clutch. Doing so would alienate you from the very task of driving, from what it means and what risks are involved. Taking these few controls away from you in favor of an ambiguous I-know-it-all-so-you-shouldn't-care interface of ultimate convenience would probably not increase the security of operations.
On the other hand, we may expect you, as a driver, to know that there is an engine, that this is why the car moves, that it needs gas/petrol in order to run, that deceleration is proportional to speed, etc.
Why is it so different with anything involving a computer? Is it, because we're telling them so?
In a previous reply I brought up mirrors, and now lights, pedals, and other controls: these are directly user-facing and must be interacted with in order to get anything done. The same goes for knowing there is an engine that might need engine-y things like water and oil.
But where is the requirement that a user know about URLs in order to use the web?
Way back when, we had AOL keywords. Now we have Google and apps and other tools that make URLs unnecessary.
My grandmother, whom I mentioned before: she browses solely through bookmarks and via Google results. That a URL exists is not only an implementation detail but completely unneeded and unused in her case.
Then something like an SSL cert? Where things will work just fine without one? I don't even want to imagine trying to explain that to my grandmother before sending her off to her decades-old AOL mail inbox.
Only recently, with Chrome displaying "Not Secure", have I even noticed any concern or interest among non-technical friends and acquaintances.
But why is it that computers are that magical? Computers have now been around for nearly 70 years. It's a technology as new as airplanes were in 1980. (If we include digital accounting machines with storage, they have been around even before the first flight of the Wright brothers; they are even older than any living person.) Computers are also what many, if not most, deal with for a living on a daily basis. If we consider users generally as unfit to grasp even the basics, why is it that anyone is still admitted to their kitchen? (There are really dangerous, pointy objects there, which may cause real-life harm, and, if you have a gas oven, you may even blow up the house or the entire block. How could ordinary people tell a knife from a dish, and how could we assume that they would know where they put them? Isn't it possible that someone just wanted to have a glass of water from the tap and blew up the house instead?)
Also, I consider some of this very US centric. In many parts of the world, AOL wasn't a big thing. In many languages, people are used to the fact that important parts of a sentence come at the very end, e.g., the verb, at least in some tenses. Moreover, most important identifiers go from the minor, less important part to the bigger, most significant ones. Why can't we tell users that domains work just like their post address? (As in "street-city-country". And there are even funny ones, like "street-city-state-country" and even funnier ones, like "c/o", meaning it's not the usual addressee. Why are people able to deal with this?) If you're living in a western country, even your own name works probably like this. Why this, oh, it's magic, don't care?
I'd say, it is mostly, because we encourage them not to care. Because we say, "Yes, that's really difficult", where we ought to say, "No, it's really simple and you ought to know." The user is still the person in charge. Pampering and flattering the person in charge into incompetence isn't apt to end well.
I'd say, there's a chance to convey simple things, like: the cloud is not on your local machine, or how a URL is principally constructed. Or that a file is saved only when you save it.
Edit: Returning to the obligatory-car-simile, when I did my driver's test, I had to know the intrinsics of an engine, of the braking mechanism, of the steering. I was tested for knowledge of ad-hoc technical repair. It was assumed reasonable for a driver to grasp, to memorize the details, to minutely describe them, and it was even mandatory to do so in order to obtain a license. However, it was less important to drive a car then (you could do well without this in most occupations) than it is to operate a computer nowadays.
Edit 2: And, to level up a bit, how come academics are able to correctly cite a book and page, but are unable to parse a URL – and are even flattered for the latter?
Without prejudicing the rest of your points: computers are very unlike most inventions. Computation is extremely powerful; our only working definition of what it even is relies on an intuition, called the Church-Turing thesis, that essentially says computers are doing categorically the same thing we are, but doesn't purport to explain why that's so. It looks observably true, and that's the best we have.
So, it's entirely unfair to suppose that since people got used to having tap water and so we are surprised if a person can't operate a tap, therefore they should be used to the entire complexity of computation by now.
You definitely _should not_ count machines that aren't actually computers ("digital accounting machines with storage") since those aren't Church-Turing, they're just another trivial machine like a calculator. Instead, compare the other working example we have of full-blown Church-Turing: Humans. Why aren't people somehow used to everything about people yet? People have been around a long time too. Why isn't everyone prepared for every idiosyncratic or even nonsensical behaviour from other people, they've surely had long enough right?
I find this interesting: the parallels up to this point. My intent is not to poke fun at anyone, but just to look back at the conversation we've just had.
We're talking about users not understanding the technology they use daily.
jwalton, in trying to give an example with spark plugs, allowed a more knowledgeable user or practitioner, mirimir, to give a more technically-correct description.
It seems to echo the main problem we are discussing in which users of a technology are not the same as those who design or know the nitty-gritty details of that technology.
Assumptions learned from day to day use in that technology (all cylinders have one plug, the google box is the only box I need) can so easily be proven incorrect when speaking to an actual expert in that field.
But it's arguably not such a great example, because details of engine design are generally trivial for drivers. Maybe a better example is the low oil pressure indicator. Maybe most people don't know what that actually means, but not having one can lead to severe engine damage. Years ago, I had a car with an oil radiator, and the oil line failed. So I knew to stop immediately.
I occasionally do that myself, and I especially suggest that non-technical users do exactly this. I can mistype a URL; Google will correct me if the site is well-known. Otherwise I risk going to a phishing website.
I just finished helping out a friend who did exactly this thing, clicked on an ad on the results page thinking it was Google's top result, and was redirected to an ESTA scam site where they lost a bunch of money.
What's easier to tell apart for nontechnical users? URL bar from Google search field or ads from Google results?
Here in Austria, we had a rather problematic court ruling regarding this. Following this, and in line with common recommendations, catch-all domains were mostly disabled; at least, you run them at your own risk.
What it was about: Say, there was a review or best-price-search site (here, "service.at"), using catch-all and mapping subdomain requests to product searches. So "acme.service.at" would be remapped to, say, "service.at/search?q=acme". Now Acme sued, claiming anything containing the name "acme" on the web ought to point to their site, including the subdomain "acme.service.at", since they were the owner of the name "Acme". To almost everybody's surprise the court decided that this was true, according to naming rights, and that a subdomain containing this name, even if just implemented virtually by a catch-all mechanism, was an infringement. This also implies that "acme.example.at", which is included in the set of "*.example.at" and mapped to the very same content as just "example.at", is a possible infringement. – Strange, but this is as it is. And, yes, it's particularly about search engines, like Google.
(I really don't remember the particulars, since this was some years ago by now, but we may assume that the results returned by the service weren't exactly favorable and that the particular search enjoyed a higher PageRank than the site of this vendor, or at least a rank which brought it up near the site of the vendor in search results.)
IANAL, and I'm just speculating here, but the ruling could have been framed as "copyright laws apply to domain names on the internet" (i.e., you can't use the name of a brand you don't own in a domain name), and acme.service.at is a domain name, but service.at/search?q=acme is not.
I think this is exactly right. All the letters of the name are important and cannot be left off. If people want to equate www.domain to domain they can put a 301 redirect on the www address but the browser has no business making assumptions about what the owner of the name space thinks are equivalent.
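For illustration, the 301 approach in miniature – a toy Python server you'd point the www host at (example.com is a placeholder; in practice this is a one-liner in your web server or CDN config):

```python
# Toy sketch: answer for "www.example.com" and permanently redirect
# every request to the bare domain the owner declared canonical.
from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL = "https://example.com"  # placeholder canonical origin

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)  # 301 = moved permanently
        self.send_header("Location", CANONICAL + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```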
Once upon a time, back in the middle 1990s when it was a major WWW browser, Netscape Navigator assumed that it could wrap domain names in URLs with "www." and ".com".
"www." was used as a way of delineating what was a web address. Hence the fashion of putting that there so people knew you had to do it in the browser. Before then people used to also put the "http://" on there, and the combination of the two on vehicles/signs was ridiculous.
We're now in a web world. People know what a URL is. "domain.com" isn't ambiguous; it's obvious to man, beast or child that you type it in the browser. Most decent websites redirect "www." or the bare domain to whichever is the canonical version; the bare one should be canonical, to be fair.
The 'm.' is ridiculous too and ruins shareability. If the link was the bare domain, and the frontend does any switch that's needed, we'd all be better off.
There is an actual (small) reason for the existence of "www" nowadays. You cannot have a CNAME record at the domain apex (example.com). Many DNS providers implement a workaround by resolving the CNAME record into A/AAAA records when queried.
You can, but it prevents you from having other records there, which includes things like an MX record. Just a nitpick, as in practice that prevents most people from being able to use a CNAME at the apex.
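You can watch this from the resolver side with the third-party dnspython package (`pip install dnspython`); the hostnames are placeholders, and this is only a sketch:

```python
import dns.resolver  # third-party: pip install dnspython

for name in ("example.com", "www.example.com"):
    try:
        answer = dns.resolver.resolve(name, "CNAME")
        print(name, "is a CNAME for", answer[0].target)
    except dns.resolver.NoAnswer:
        # The apex can't be a CNAME: RFC 1034 forbids a CNAME from
        # coexisting with other data, and the apex must hold SOA/NS.
        # Providers work around this with ALIAS/ANAME-style flattening.
        print(name, "has no CNAME (expected at the apex)")
```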
I upvoted you because you're correct - it's a failing of the system.
My original point still stands though - we used to use 'http://' and 'http://www.' as a signal that this was a web address. I cannot believe this will still stand in 5 years time. The default is now the domain name, not the phone number.
"www" was not a marketing trick, it was legitimately a different domain, by convention. General users never understood it, so companies started to have to add it to match their weird expectations.
To associate a base domain with a company identity happens to be true MOST of the time, but isn't universally true. Plus, foo.example.com follows different security rules than bar.example.com (CORS, certs, etc.).
The problem here is that the precise domain has a technical meaning... but consumers are using it for a different meaning. One that is also useful, BUT NOT THE SAME.
Pretending the url matches this new meaning (and altering the display to match) serves both groups poorly.
There was never a requirement for www. to be anything other than the bare domain for most people. It became useful because it was synonymous with being a web thing right back when people hardly knew what the web was. This was serendipity, which turned out not to be serendipitous when people had to write it on signs / read it out on an advert etc.
I see no reason now to associate www. with the web version of your service. If I receive a request on port 80 or 443 for the bare domain, what's a better option than service the 99% of people who want a webpage?
> It became useful because it was synonymous with being a web thing right back when people hardly knew what the web was
You're missing some history here (or we're talking past one another). Back then (source: lived through it), subdomains for particular protocols (www.example.com, ftp.example.com, gopher.example.com, mail.example.com) were pretty common, though not a requirement at all. Almost all the users were technical, so this helped users AND admins. Plus, machines were FAR less powerful back then, so anything exposed to the "public" probably didn't want to handle multiple purposes anyway.
Then non-technical users came in, saw "www.example.com" being used many places, and assumed it was part of the system. New domains either created a "www" subdomain or lost traffic (until browsers started trying to compensate). Note that what we're discussing is a switch in behavior. Prior to what the article is discussing, a browser would try the domain as typed, and if it failed would try prepending "www" AND ADD IT.
> I see no reason now to associate www. with the web version of your service
First, you still have people that type the "www" automatically because they never learned that was technically incorrect.
Second, what if you're reselling subdomains? The concept of "base domain == identity" is relatively recent and possibly temporary.
Third, what if you don't HAVE a single "web version of your service"?
The internet (and the web) has succeeded (granted, half by accident) by providing loose rules so practices can evolve inside those rules. If we start encoding the current practices in the rules, the rules no longer handle evolution well (or possibly at all).
I'm splitting hairs because hairs sometimes matter.
Did you look at the bug report? It's rife with valid examples of why this behavior is wrong on Chrome's part. For example, when "www" isn't the first subdomain, Chrome still elides it.
This thread contains many examples of professional technologists who don't know what a URL is. You, for example, don't seem to know what a URL is.
I guarantee you, most non-tech people don't know what a URL is. People know what links are, to the extent that they can click/tap on them to get to some thing, or copy them and share/email them. That's not the same as knowing what they are.
You're making a dangerous assumption about what people know, including yourself.
But that's because it's .com. Now, there are too many gTLDs, and companies will build their brand around their use of .io, .me, .cs, .es, etc. Just the other day I saw a link that caught my eye to studio.zeldman, and I had to take a moment to hover over the link to see if that was some new branded gTLD.
Nitpicking: .io, .me, and .es are ccTLD (respectively British Indian Ocean Territory, Montenegro, and Spain) and have been around for at least a decade. .cs was a ccTLD for Czechoslovakia.
Ahem. "www" comes from the times, when you had to have a dedicated machine, or, at least, a dedicated network interface, for each service. Hence, you had an FTP server, creatively named "ftp", and your WWW service ran on a host surprisingly named "www". A concept similar to well-known addresses.
No, it didn't. Some separated services that way, but it was never any more necessary than it is now. Source: used to run an ISP through the early days of the web, and collocated services on the same host all the time.
The commenter I replied to claimed that you 'had to' have a separate host or interface for each service, which is flat-out false.
When people split it, it was over capacity or manageability concerns, but often we also set up separate hostnames for different services just because it was what people expected; often it pointed to the same hosts.
The point was that this was always a choice - there has never been a point where it was required. My first ISP back in '93 ran mail, web, ftp and shell accounts on a single pc. So did the ISP I cofounded in 95. It isn't and never has been a technical limitation, but a choice down to what worked for you. Especially as address rewriting firewalls also existed back then, so multiple services pointing to the same external IP in no way implied they had to be the same physical host.
For us (early regional ISP, mid-'90s), a lack of separate per-service hostnames caused significant scaling fragility.
In the initial rollout, all services were served from a single physical host with just one listening IP, which the bare 'example.net' resolved to. (Was this naive of us? You bet.) Other service hostnames (www., smtp., etc) were all just either CNAMEs to that hostname, or A records to that IP.
When our SMTP usage started to exceed the capacity of that single host, we tried to move 'smtp.example.net' to a different host. This is when we discovered that many users were configured to use 'example.net' for SMTP instead. (We couldn't afford big-iron load balancers, and they were less common then; we just used DNS round-robin for load distribution.)
At that point, we realized that customers were using bare "example.net" for everything - homepage, SMTP, POP3, IMAP, FTP, DNS, shell access - you name it. It was easy to remember - and it worked. So it was hard-coded everywhere - FTP scripts, non-dynamic DNS settings, etc. And this was looong before email clients had automatic configuration detection, so that was all hard-coded, too.
So we had to painfully track down all the users who were still hitting 'example.net' for SMTP, and help them update their configs before we could turn down SMTP on the original ancient host. The other services had to go through a similar painful transition.
We concluded that the only way to prevent this from happening again was to make sure that the bare hostname never offered any services at all - except for a single HTTP service whose sole purpose was to redirect 'example.net' to 'www.example.net'.
From then on, each new vISP domain had the same non-overlapping service namespace ... so that the otherwise inevitable configuration drift would be impossible.
Later, with the rise of things like email autoconfiguration, load balancers, and POP/IMAP multiplexors (like 'smunge'), we had more options. But at the time, avoiding services on 'example.net' was the only way to go (for us). Having a bare 'example.net' as the sole hostname in the browser bar was a sign of brokenness. :)
I wasn't claiming it was a technical limitation or a requirement, just that the time when this happened certainly did exist. Choice or not, the time existed. That was my point. Fair enough.
I disagree about the 'm.domain' convention; it's good and useful. I like the ability to retrieve a mobile site on my desktop, and vice versa. Sometimes I'll be on a site that's difficult to read on mobile, and speculatively try the 'm.domain' - often it will work. When the site itself tries to autodetect what device I'm using, it often makes a poor decision that is not subject to appeal.
People know what a URL is. But this issue demonstrates misunderstandings of URLs, as "www.x.y" is not necessarily an official "x.y" page.
"www" == "web address", or "m. is ridiculous" which are annoying fashions (agreed there!), but that has literally zero impact on the security characteristics, implying yet again that people do not understand URLs.
---
No. This comment is a perfect example of why this is not safe to do. It's throwing open the door to abuse.
What makes you think that HTTP requests are the only thing domains are used for? As mentioned elsewhere in this thread, Active Directory requires that the A record for the bare domain point to the PDC.
If you're serving email from example.com, for reputation reasons you should have the bare domain's A record pointed at your primary MX.
I understand that here on HN we're focused around web-based companies, but for every other corporation, there is a plethora of other services served out of a domain -- of which web/www traffic is maybe 10%, if not less. Everything from email to voip to directory services to vpns to crazy internal apps all rely on the corp's domain/domains, and you definitely should not be pointing your bare domain at your web server (which, chances are, is some contractor-built page living on GoDaddy completely outside your own infrastructure).
In a typical company, you'd have some server serving example.com doing some or all of the above. It would then be running a light http server which accepts requests on 80/443 and permanent-redirects them to www.example.com.
Good point - but even my mum (65 and not very good with phones or computers) will just bash the bare domain into her browser. If it's got "www." she'll use that; if not, she won't. She still knows it's a web thing; the www. for her and everybody else is superfluous.
Things change. We've had something like 30-odd years of URLs. The people who can't deal with this are vanishingly small, and those that can't are likely not your target market; or they're the sort that'll just consider Facebook to be the web.
I'm not disagreeing with your point btw that some people can't deal with this - all I disagree is the extent.
That may be true, and while I wholeheartedly disagree with this change, those who are younger than the web are the next shepherds of the web.
We're going to see more and more changes that the "old folks of the internet" are going to hate. Some, or even many, of these changes will actually be good changes. We shouldn't prejudice on age.
Again, to be clear, I think this particular change is horribly broken.
Older people care more about consistency than "usability". They can successfully complete long tasks, maybe with several retries, but only if things are consistent: input a long text somewhere, dial a 15-20 digit phone number, etc. But they usually can't deal with unpredictable situations: where the computer will "intelligently" help them and fill in parts of the input, where the same action works differently on different devices, where they need to know in advance how the system will behave.
PS: Consider how you would guide an older person over the phone when he or she is accessing the Citibank website (for which www.citibank.com is a different website), while Chrome "intelligently" hides an essential part of the address.
Agree to that - my experience exactly. A very recent example: my father's using Skype to talk with family. They refreshed the UI, he got the new version installed, and that was it. I've had to answer multiple questions about what this and that button does. "The same, dad, it just looks a little bit different." What I got as an answer was: "It's placed somewhere else. It is so confusing. Couldn't they just have left everything where it's always been..." Thankfully I have remote access to his desktop, so I can guide him around in situations like this.
And I am quite sure removing 'www' on the address bar's domains is as confusing.
Some of them are great. Some of them do incomprehensible shit like loading text in chunks while scrolling, making it impossible to scroll to the end of an article without waiting 5-10 seconds for it to appear.
(This might be a result of my using a content blocker to block mobile ads, but the fact that it’s even possible infuriates me as a user. I mean, it’s text! Just show me the text!)
I agree with you 100% that responsive design isn’t there yet.
I'm in the "basic usually better" camp too, but this doesn't matter, as we're all in agreement here! We want to know whether we're on the "better" or "worse" version of the site, for whatever each of us mean by "better" or "worse".
Ehh, this might have been true 10+ years ago, but most people are more savvy than that today. I think you are describing something that is less and less common (which may explain why Google is trying to encourage people to keep doing it).
Yes, but I suspect it is far more common (I have no evidence to supply however) than the grandparent comment appears to imply.
We (the HN crowd) can easily get caught in a 'bubble' where, because we know the details, and those with whom we typically associate also know the details, we extrapolate those observations to conclude that "most people" know the details.
But until one has been in a situation of providing support or training for a diverse user group, one does not see just how little technical knowledge the "average joe" (a set of which the HN crowd is very much not a member) has of these things. The "average joe"'s level of technical knowledge is astonishingly low compared to the HN crowd's.
Yahoo Search still shows a second search box under the first search result if you search for Google, to capture users that search for Google then type in their actual search term, even when they're already on a search engine.
They may not make up a huge proportion of users, but they still make up a huge number of people in actual terms.
For example, with Active Directory, the DNS A record for your foo.com domain must resolve to your domain controllers. Your www.foo.com will resolve to a separate non-domain controller web server.
I think a lot of the commenters here are thinking solely in terms of commercial web services such as twitter.com and such, but there's so much more to the wider landscape.
Thinking about it that way gives me conflicted feelings. Much as I hate what Google has done here I also feel like any organization stupid enough to use their public domain name for their Active Directory domain name deserves every little pain they receive for it.
You lack the compassion that comes with experience.
My $dayjob has our AD root domain the same as our public root domain. Because we implemented AD in the year 2000, and this was Microsoft’s recommendation for domain naming way back then.
And if you use Exchange, you can’t rename your AD domain, you have to rebuild your forest and migrate piecemeal. So we’re stuck with it.
The practice of using Corp.example.com did not evolve until many years after Windows 2000 and Exchange 2000 were in the wild.
So we run http redirectors on each of our domain controllers to send traffic to www.
This one is kind of a "religious" topic for me, I guess. I'm sorry that it is, but it makes me exceedingly defensive.
I trained on Active Directory (AD) with a group of veteran sysadmins in 1999. I don't have access to the "Microsoft Official Curriculum" book from my class in '99 (long since thrown away), but I have a distinct memory of a lively conversation in class re: the pitfalls of using a public domain name as an AD domain name (or, worse yet, a Forest Root domain name). It was very evident to our group of veteran sysadmins that using a public domain name in AD would create silly make-work scenarios (like installing IIS on every DC just to redirect visitors to "www.example.com" – just as you describe, albeit IIS didn't natively support sending redirects at the time).
I'd go further and suggest that anybody with a modicum of familiarity with DNS knows having multiple roots-of-authority for a single domain name is a bad idea. Microsoft not supporting split-horizon in their DNS server (like BIND does with 'views') compounded the difficulties with such a scenario in an all-Windows environment.
I certainly wouldn't argue that Microsoft has given exclusively good recommendations for AD domain names in the past (evidence ".local" in Windows Small Business Server), but I am reasonably certain that their documentation always suggested that using a subdomain of a public domain name was a supported and workable option.
I started deploying AD in 2000. I've deployed roughly 50 forests in different enterprises, and I've never used a public domain name as an AD domain name. I've domain-renamed all my subsequently-acquired Customers for whom it was an option (which it was, so long as they had not yet installed Exchange 2007), and have been rebuilding the Forests of Customers who made the wrong decision in the past, where it makes economical sense.
Windows 2000 didn't support stub zones, however. At the time that Active Directory was new there wasn't a good way to do split-horizon DNS with the Windows DNS server.
As an aside: I really enjoy your writing about using SRV lookups. It makes me sad that SRV records aren't being used as much as they could / should be.
I don’t know anything about AD, so this might be a stupid question: can you not just run a web server on the same host as the AD server or port forward all HTTP traffic to a different server?
A domain controller on the internal network might not be the right place to run a copy of the public-facing content HTTP server (which might be in a datacentre, or even managed and run by an outside party, and might not be served by IIS). Then there are considerations of firewalling rules, browser rules, anti-virus rules, and even DNS rules for machines on the internal network that access a public WWW site that DNS lookups map into non-public IP addresses. (To prevent certain forms of external attacks, system administrators have taken in recent years to preventing this very scenario from working by filtering DNS results.)
Having a disjoint DNS namespace (and the needless make-work that it creates) is the issue, more than running HTTP servers on all your DCs to do redirects. There is absolutely no practical advantage to running an Active Directory domain with a public DNS name. It's all downside. It has always been all downside, and anybody who had any experience with DNS could see that all the way back in the beta and RC releases of the product in 1999 and 2000.
http://pool.ntp.org/ takes me to an "It works!" default Apache 2 page for an Ubuntu installation. As the comment in the issue describes, http://pool.ntp.org/ takes you to a random ntp server.
If you want another example, try google.com using Google's own DNS:
>http://pool.ntp.org/ takes me to an "It works!" default Apache 2 page for an Ubuntu installation. As the comment in the issue describes, http://pool.ntp.org/ takes you to a random ntp server.
Either way, the ask was for a difference between www.example.com and example.com, not a difference between www.pool.example.com and pool.example.com. In the latter case, the remaining subdomains will still be shown (AFAIK).
>Even if you ultimately end up at the same site through redirects, you're clearly not going to the same site initially.
Which is nothing that an end user is going to care about, and it doesn't provide an example for the question that was asked.
That is absolutely insane, and someone should be fired and shamed for this. I didn't like trimming even a pure leading www., but trimming any www. in the hostname is just dumb behaviour.
How would I differentiate between loadbalancer1.www.intranet and loadbalancer1.intranet? THOSE ARE NOT THE SAME.
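A toy reproduction of that collision (this mimics the reported display behaviour, not Chrome's actual code):

```python
def elide_trivial(host: str) -> str:
    """Strip 'www' and 'm' labels wherever they occur -- the reported behaviour."""
    return ".".join(label for label in host.split(".") if label not in ("www", "m"))

print(elide_trivial("loadbalancer1.www.intranet"))  # loadbalancer1.intranet
print(elide_trivial("loadbalancer1.intranet"))      # loadbalancer1.intranet -- indistinguishable!
print(elide_trivial("www.pool.ntp.org"))            # pool.ntp.org (a different machine entirely)
```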
That's exactly right. www.pool.ntp.org is the project site. pool.ntp.org is for getting an NTP server. Which one you get will depend on your location and random chance. That server will run NTP, but what it happens to run on port 80, if anything, is up to the operator of the server.
A) Consider any sharing platforms where unrelated bodies coexist with distinct subdomains under a common root domain (e.g., Blogspot, Tumblr, etc.). While "www" is probably a reserved name and mostly not of practical concern, "m" may be a practical issue.
B) Consider subdomains for test-purpose like "www.test.www.example.com" (now displayed as "test.example.com", which is actually not even the root of the specific subdomain).
C) Users are unsure whether they are on the full-featured or a reduced mobile site when "m" is hidden.
D) I may actually want to have a service agnostic default host at the root and subdomains for dedicated servers (like "www", "ftp", "mail", "stun", "voip", etc). Maybe this one just returns a short text message by design, if accessed on port 80. Not every domain is just about the WWW. (Edit: While we may assume that such a server would forward in practice, this may be assuming too much.)
>> there's a difference between "www.example.com" and "example.com"
> Can you link to a site where these two are different?
There are 3rd-level domains where everyone can register "www.{TLD}", e.g., .com.kg, .net.kg, .org.kg. Look at www.com.kg. It's also available as www.www.com.kg. Or www.org.kg, which is in fact www.www.org.kg. If you display just the last part (com.kg, org.kg), does that mean you're viewing the root website? Nope, it doesn't. That means that Chrome is fucked up.
Someone mentioned www.citibank.com.sg vs citibank.com.sg in the issue.
One of my school's websites: I can't remember which one it was, and this was before I understood what the difference is, but www worked much better than without, IIRC.
This also applies to m.*, so literally any web-app with a mobile version.
I don't remember the site offhand, but I was going to one recently where example.com didn't even work, it was some weird error page -- you had to use www.example.com. If it comes to me, I'll post it.
Not really fixing it, though, because they just strip the www part from the name. If the developer does not set up www.domain.com and the user goes there, Chrome will not "fix" anything.
I haven't tested it, but it will most likely show up as domain.com in the address bar and result in an error shown to the customer.
If Chrome wants to strip www as it's essentially the same as domain.com, they can submit an RFC and not just decide for everyone. Honestly, I hope they start making more stupid decisions like this, so people move to Firefox and we get more competition.
> If the developer does not setup www.domain.com and the user goes there chrome will not “fix” anything
Yup, that's on the developers. Hopefully this fix will make it easier to set up DNS with just one domain instead of two. Props to Chrome.
> It seems that "m." is also considered a trivial subdomain. So when a user clicks a link to a "m.facebook.com" uri, they'll be confused why FB looks different when the browser reports it's on "facebook.com".
Will they? I find it very unlikely that many users would even check the URL in the first place, let alone understand that m.foo and foo route to different places.
Lots of people saying this is for the benefit of non-technical users.
For me, this is a minor inconvenience, precisely because I'm technically capable/interested enough to handle the inconsistency.
But this kind of stuff (and I am speaking somewhat generally here) tends to frustrate me, precisely when I'm trying to educate or deal with a non-technical user in some capacity where it happens to matter. I can't just tell them, "that is the address of the page, and that will always lead to the exact same place if you type it fully and correctly, and that's that". Instead I have to get my head around what if they're using browser X or operating system Y, I have to ask on the phone first, "hang on, tell me what you see on your screen", I have to say to the lady who's eagerly sat in front of me with pen hovered above paper waiting for me to dictate how to do a thing in straightforward steps, "well it depends, first you have to check this thing, and if it's like this then you can do this but it might also be like that in which case it's a bit different, let me explain" - and this is usually the point at which the non-technical user gets tired and throws the book at me.
In short, I think consistency of information and process is usually much more understandable and useful to users of any level, than the dumb 'simplification' of this half-baked information-hiding.
Yeah, I think that consistency is greatly undervalued.
Grandma has no problem with technical details being shown; she just ignores them. She knows that clicking the button on the top left will go to the webmail and that she needs to click the big red button in order to write a new email. Change anything and she will get lost, click everywhere, and usually find the solution, but sometimes make a mess.
There are also security implications. I told her to be aware of any change, because it may imply a phishing attempt or some malware. But how is that going to work if legitimate software always changes? You are basically training them to stop thinking about what happens, which is terrible, since thinking is the only thing that can protect them; they don't have the technical intuition most of us have.
Browser start screens with a large search box in the center haven't helped either. Some users do see no difference between the location-field and this search box. Some have even unlearned this. Arguably, it facilitates ignorance of the location, the significance of URLs and how they work. Reading a URL isn't witchcraft, it's just about three simple things. But dumbing things down towards convenience at the expense of consistency will not empower users.
(Surprisingly, ordinary people have been able to manually dial a phone or to parse a street address without the help of a map service in the past. It can't be that bad.)
I wholeheartedly agree with this. There are two sides to the debate now. One side says that the machines should be clever enough to guess the user's intention and work around their mistakes, while the other side says that the machines should stay dumb and square. The question is who handles the complexity of this world, and in my opinion it should more often be left to the human, not to the machines.
The worst outcome may be when there's ambiguity involved and the smart system takes precedence, but occasionally happens to take the wrong route – and suppresses any feedback that would let the user intervene or even notice. That is pretty much where we have arrived with this. I guess collateral damage has become a matter of everyday life.
Most comments assume that this is for solving user confusion, or security, or building a better URL scheme, et al.
It's not, that is all smokescreen.
As ivs wrote[1]:
> They are going to hide the amp subdomain, so you don't know if you're looking at AMP or the actual destination. And then suddenly the whole world funnels through AMP.
And for that reason, it won't be reversed until people call them out on what they are actually trying to do.
This should be the top comment. After this change, we are just one step away from using the browser's address bar only as a Google search box, and Google as the entire internet's gatekeeper. Google doesn't make money when you type the URL into your browser's address bar – it makes money when you don't.
No, this Chrome update is about hiding the "amp." subdomain from the original URL. What Google wants to achieve is to make it impossible for the average user to tell when the entire website is being served from Google Cache.
Google cache links aren't served from `amp.yoursite.com`, they're served from `cdn.ampproject.org`.
If you're visiting `amp.yoursite.com`, then the site _isn't_ being served from the Google cache.
Also "this Chrome update is about hiding the "amp." subdomain on the original site from the viewer" is patently false since this update _doesn't_ hide `amp.`; only `m.` and `www.`.
> Google cache links aren't served from `amp.yoursite.com`
That's not where things are going, according to your own source from the previous comment:
> Our approach uses one component of the emerging Web Packaging technologies—technologies that also support a range of other use cases. This component allows a publisher to sign an HTTP exchange (a request/response pair), which then allows a caching server to do the work of actually delivering that exchange to a browser. When the browser loads this “Signed Exchange”, it can show the original publisher’s web origin in the browser address bar because it can prove similar integrity and authenticity properties as a regular HTTPS connection.
So, the content will be served from Google Cache with the original publisher's URL in the address bar.
> this update _doesn't_ hide `amp.`; only `m.` and `www.`
It's Google who decides what it wants to add to its browser's list of "trivial subdomains", and when – especially once websites with "amp." subdomains become common.
Yes, once the Web Package Standard is finalized and implemented then AMP pages will indeed use the normal `amp.` URLs.
But at that point, what would be your concern with hiding `amp.`? That's no worse than hiding `m.`; it's just another subdomain which serves a different version of the same content. Heck, sites could serve their amp pages on `m.` domains if they wanted to; the actual subdomain they decide to use is irrelevant.
Seeing "amp." in the URL meant that it's not a "full version" of the site. Google wants to remove the separation for the end user, so that all publishers would serve their content through Google Cache. And that's a big concern to me, since it means, the entire web will be served from a single company's database.
> Seeing "amp." in the URL meant that it's not a "full version" of the site.
Yes, but once again that's no different from `m.`.
> And that's a big concern to me, since it means, the entire web will be served from a single company's database.
Are we talking about before or after the Web Package Standard is implemented here?
If before, then your concerns about the URL aren't applicable because `amp.` links aren't served from the Google cache (only `cdn.ampproject.org` links). If after, then the content isn't "served from a single company's database" anymore; it's served using a decentralized and open standard for cross-origin server push.
> If after, then the content isn't "served from a single company's database" anymore; it's served using a decentralized and open standard for cross-origin server push.
Does this mean that Google will no longer rank those who implement AMP and serve through Google Cache higher than those who don't?
> Based on what we learned from AMP, we now feel ready to take the next step and work to support more instant-loading content not based on AMP technology in areas of Google Search designed for this, like the Top Stories carousel. This content will need to follow a set of future web standards and meet a set of objective performance and user experience criteria to be eligible.
Furthermore, once the Web Package Standard is finalized, the "Google Cache" won't exist anymore, at least not in the same way it does now.
The Web Package Standard allows any web page which supports origin signed responses to be served via cross-origin server push from any server that supports HTTP/2. So Google will probably still cache and push pages via their own infrastructure when you visit those pages from your Google search results, but the actual content being served will be fully controlled by the original publisher and behave exactly as if your browser received the page directly from the publisher's server.
> So Google will probably still cache and push pages via their own infrastructure when you visit those pages from your Google search results
And that's what I mean by saying that the entire web will be served from a single company's database – a company which already controls the browser and the search. You will be able to browse the web without ever leaving Google servers, and Google will be able to track your every interaction on the web.
This doesn't increase Google's ability to track you at all. If you click a link on a Google search results page they already know you visited that site; them serving the initial page load via a cross-origin server push changes nothing.
It also doesn't give them any more control over the web, since the page contents are still strictly controlled by the original publisher (and that's cryptographically enforced).
Right now, Google only knows the first page I visit from its search results. After this update, Google will be able to follow me across the entire web, because it will be the one who serves it to me. How is that not a concern?
Are you seriously claiming that the largest ad company in the world is interested in decentralizing the web? The blog article you linked to yourself says that the goal of this entire initiative is to increase the usage of AMP by "displaying better AMP URLs".
> After this update, Google will be able to follow me across the entire web, because it will the one who serves it to me.
That's not how it works. Only the initial page is loaded over cross-origin server push. After you actually navigate to that page you're no longer on Google's site (which is why the URL bar is able to show the domain of the site you just navigated to instead of still showing google.com), so obviously they don't have any enhanced ability to monitor what you do after that point.
> Are you seriously claiming that the largest ad company in the world is interested in decentralizing the web?
The general web is already decentralized. This is about decentralizing AMP. And yes, decentralizing AMP is exactly what Google is doing here.
> the goal of this entire initiative is to increase the usage of AMP by "displaying better AMP URLs"
Yes, and they're accomplishing that by pursuing the development of open W3C standards which can be used by anyone. Just like how offline storage on the web started as a feature enabled by [a proprietary plugin developed by Google (Google Gears)][1] until Google pursued the development of open standards to replace it: https://www.w3.org/TR/service-workers-1/ (Check out who the editors are on that draft.)
Google's been following this pattern for over a decade now. They start with a proprietary initiative, then use the lessons learned from that effort to develop open web standards that improve the web for everyone. (I can give maybe a dozen more examples if you still don't believe me.) There's no reason to think AMP will be any different in this regard, especially since Google has already made their intentions on this matter clear.
This and many other changes over a short period of time have caused me to move to Firefox exclusively. I heard Firefox is going to stop third-party cookie tracking altogether. Why not give Google the big finger and use a different browser? Vote with your cold hard actions if you feel so strongly about something.
I switched to Firefox a year ago. It's a little slower, but I'm a lot happier.
I've been trying to de-Google as much as reasonable. I moved to Fastmail as well. Still using an Android, but would switch if a reasonable alternative that wasn't an iPhone came up. I'm not paranoid or a privacy nut; I just think Google is too involved in my life.
Same here. Have you found any viable alternative to the Google Calendar? I'm at the point where I'm thinking about hosting a calendar project from GitHub myself.
Nextcloud, whether self-hosted or otherwise, works great! It's just WebDAV. You can get calendar, contacts, task, and note syncing, and it can even host your documents for reference management software like Zotero.
If you own a Samsung phone, the calendar app is good. I wonder if you can install those Samsung apps (which for some are just forks of unmaintained AOSP apps) on a regular Android if you somehow get the apk.
I was in exactly the same camp as you a year ago. Then I played with a hand-me-down iPhone 6s and couldn't believe how much more pleasant it was to use iOS than Android (Nougat at the time). Having owned an iPhone 3G and 5, my memories were of a restrictive OS and a dumb Siri, but both have really come along since. I made the switch and can't imagine going back to Android now.
Upvoted from Firefox. Only reason I use Chrome nowadays is when apps launch it directly (whereupon I strongly consider uninstalling them) or when work requires it (... which is utterly ridiculous, and very likely why our web rendering performance and consistency is utter trash).
Hangouts and GotoMeeting are the only things I open in Chrome. I'm also completely sold on Tree Style Tabs, and I don't think I could now live without...
I would love to, but Firefox just feels more clunky. Not sure what it is, but the scrolling doesn't feel native to me (MacOS, Magic Trackpad and Logitech Mouse)
Having to use both for supporting complex web apps, I can't really agree that FF is noticeably slower. Chrome does seem to have fewer silly bugs, though; for example, quick searching doesn't find multi-select box text in FF, but works great in Chrome.
Regardless of FF's little quirks, I use it almost exclusively for personal stuff. I would rather deal with those types of things than the mentality Chrome brings to table.
I've been using Edge on Windows for at least a year now and I'm quite happy. Now that it supports plugins, I haven't fired up another browser for months now.
I use Firefox as my "at home"/private browser. However, for work I unfortunately feel forced to continue using Chrome. I just really prefer the Chrome devtools, and I can't seem to find an equivalent replacement for the "manage people"/multi-user built-in function that Chrome offers. I really wish Firefox had something similar...
The containers are great, thanks for sharing – I will definitely use those at home! You're right, I should have emphasized the "feel" part of my comment, since the dev tools especially are just a matter of preference. However, I stand by my point that the profiles are definitely not on par with Chrome, since there doesn't seem to be a way to have multiple profiles open at once.
I’m sure there are other ways to do this, including specifying a profile when you initially launch the browser, but you can enter about:profiles in the address bar to see a UI to manage the profile. One of the options is to launch that profile in a new browser.
If you just need a "personal" profile and a "work" profile, what I do as a workaround is to use normal firefox for personal and firefox developer edition for work. They are completely sandboxed from each other.
Firefox does similar things, though. They hide the URL scheme by default. And subdomains are displayed in a more subtle colour than the rest of the domain.
Firefox hides the scheme IFF it is `http://`. It doesn't hide `https://`. Also, the subdomain AND the path are slightly toned down. The net effect is precisely what Google is trying to do and Apple has been doing (namely, emphasizing the second-level domain), but without actually hiding any information.
Have you tried Searx? Meta search engine, open source, multiple instances (domains) to choose from. Once configured to your particular needs, searx can prove very powerful.
Safari already hides "www.". In fact it hides everything except the root-level domain, e.g. "https://www.google.com/about/" shows just "[lock] google.com".
Firefox and Opera show the full domain but gray out everything in the entire URL except the root-level domain, so "www." is gray.
Just saying, de-emphasizing and hiding parts of the URL is clearly a trend. This isn't just a Google thing.
I thought that's how it worked too but it is not. 1 click shows the entire url and selects it. Still without "www.". Another click will show "www.".
Regardless of how you feel about the change, it does indeed hide 'www.' to the point where a power user could easily be fooled that it was the naked domain.
this is actually the main reason I cannot use Safari. It always boggled my mind that they made this decision.
Power users never look at the URL unless they want information from it, in which case the `www` is valuable.
For low tech users, it can lead to straight up incomprehensible issues, like sites not rendering properly (think of a `m.*`).
The UI gains are so small, that part of the screen is never really looked at, but needs to be there, and typically has tons of horizontal room... I don't get it
Low-tech users don’t often understand that a difference could possibly exist between “m.” and “www.” at all.
However, if it shows the TLD, they can confirm it says “google.com”. Imagine they’re visiting a Paypal phishing link, to the domain:
www.paypal.com.www.com
The most important thing to show the user is “www.com”, because they’re expecting “paypal.com”. All the rest is nonessential for protecting users from bad actor sites.
Looking at the bug report, Chrome would actually show "www.paypal.com.www.com" as "paypal.com.com". At least Safari does the wrong thing the right way.
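In other words, the display logic seems to boil down to something like this (my reconstruction of the reported behavior, not Chrome's actual code):

```python
def chrome69_display(host: str) -> str:
    # Reported behavior: every label equal to "www" (or "m") is dropped,
    # wherever it appears in the hostname, not just at the front.
    return ".".join(label for label in host.split(".")
                    if label not in ("www", "m"))

print(chrome69_display("www.paypal.com.www.com"))  # "paypal.com.com"
print(chrome69_display("www.example.com"))         # "example.com"
```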
Personally, I always want to see the full URL. It's fine if part of the domain, the scheme etc. are grayed out to emphasize the second and top level domains, but don't omit elements that are necessary to fully identify the resource because the lowest common denominator may think that fishing.com/paypal.com is paypal.
Yep, I verified that bug as well, apparently they never planned for "www" being somewhere other than at the front of the domain name. Sounds like they already know, woot!
> Low-tech users don’t often understand that a difference could possibly exist between “m.” and “www.” at all.
They should. Children probably have difficulty with '6' vs. '9,' but they need to learn in order to use our number system. Likewise, users of the Internet need to learn the domain name system. Could there be better name systems? Sure. There could be better number systems, too, but this is what we have for now.
The general public does not perceive that difference, likely as a direct result of dot-com inventiveness with respect to domain names. Thanks to the stupidity of “m.” (WAP is dead) and “amp.” (WAP lives!) and the cuteness of “baredoma.in” (Silicon Valley represent) and the insanity of “www1034.www” (here’s looking at you, HP), we have spent the last decade on the web directly teaching non-tech users that what used to matter (“www”) no longer means anything at all, and they’ve listened.
This is not a feature. Make users understand this. Don't hide it – make the main domain glowing green, wash out the rest, anything – but this trend of hiding complexity will only lead to severe undereducation on the topic, and eventually it will reach professionals as well, who also won't understand what they should.
Reducing the displayed value from { "is_secure" YES/NO, "http/https" ARGH/WHAT, "full URL" GIBBERISH } to { "is_secure" YES/NO, "domain" AOL KEYWORD } improves my chances of defending against a phishing attack someday, as well as those of non-tech users.
Reducing information density is a critical component of automobile safety measures. Dashboards in cars just prior to the "screens everywhere" era were boiled down to the essence of what's necessary for a human being to operate a vehicle safely and without putting others at risk: one bright line showing speed, one bright line showing engine speed, one bright line showing fuel remaining, and a few multicolored status icons; and then a central info display where any logic more complex than "push to show next value" requires parking the car.
You can still see the full URL by focusing the address bar with either a click or ⌘-L.
I think it makes sense for the default display to show the most security-relevant information (TLD, SLD, and presence + validity of the certificate), while deferring the full display (incl. spurious or malicious information that might be in the full URL, e.g. https://example.com/www/paypal/com/login) to a user request (click or shortcut).
That said, Chrome 69's decision to hide /all/ instances of www in the domain is unconscionably bad.
> this is actually the main reason I cannot use Safari.
Then I have good news for you! If you go to Safari's preferences and select the Advanced tab, there's a checkbox called "Show full website address" that disables this behavior and shows the full URL in the search bar.
Safari on iOS barely has enough room to show the domain, let alone the full URL. Tapping on the URL bar will present the full URL in an editable/scrollable text field.
There is quite a bit of confusion about how it actually works; nobody seems to have had a look at it. Chrome simply hides the subdomain if the URL bar is not in edit mode; those parts are still accessible/editable, and the HTTP host behaves (and is recovered from history) as before. Copying the URL works as expected.
It is still confusing for tech people, because we often need awareness where we are.
> Chrome simply hides the subdomain if the URL bar is not in edit mode
Not so. You have to click the Omnibox twice.[1] Clicking on the Omnibox once puts you in a completely new state, "edit mode [with corrupted URL]," and clicking on the Omnibox again puts you in "edit mode [with correct URL]."
It seems like it would be confusing for anyone. Especially since it doesn't just remove the lowest-level domain if it's www, but any part that is www. So "www.paypal.www.com" would be displayed as "paypal.com". If that isn't great for phishing, I don't know what is.
The UX of what I'm looking at instantly moving under a click is very unpleasant too; the www. and http and even the "Secure"/"Not secure" banners cause shifts of like 200px.
Lots of Google's UI is getting (or has) things shifting instantly under the pointer, and it's quite annoying. The new Gmail design's quick-tools are often in the way, unknown until you actually click. Hell, calling on my phone shifts the speaker and keypad buttons over instantly when someone picks up, often placing the person I just called on hold if they answer too fast.
Wasn't there a pretty well-covered UX rule about not moving shit around on users? Up there with "don't use modes" and the like.
I don't know if it's the same thing, but I'll bite: it's absolutely maddening to edit or select part of a URL.
Click in the address bar and the entire address is selected, then you click on any part of it to either select a part or to place your cursor in order to add to it, after which Chrome appears to first shift the entire URL to the right in order to show the protocol, then it places the cursor within the shifted URL under your pointer. This causes (attempted) selections to be established from some other place in the URL than the user intended.
This is about Google having a fundamental weakness in product management and UX, giving me the equivalent of a Windows Registry setting to change is not helping, practical as it may be.
It's weird too, because Chrome invented the UX where, when you close lots of tabs, the browser UI doesn't reflow but keeps placing the close X under the mouse until you move away. So it's like they understood this once and have forgotten.
Those who are saying this change is to make things "easier" for certain non-literate users may be correct, but they should consider whether such a justification is desirable at all from a moral perspective.
No doubt all of us have at some point been forced to learn things that we did not find particularly useful or pleasant to learn at the time, but then later experienced a great "satisfaction of knowledge" when faced with a situation in which that knowledge became advantageous or even essential, and then proceeded to use it to better ourselves.
Imagine a world in which none of that learning took place; one in which you never have to think, everything you see and do automatically satisfies you and keeps you in a blissful state of ignorance. Who makes the decisions in that world; or rather, who can make those decisions? Who is in charge of your life? Not you.
Gradually reducing the motivation to learn, by making things "easy" and hiding/obfuscating anything that could be used as a starting point for more learning, makes for a population that won't think, won't learn, won't question or rebel. It makes them docile and easy to control.
Making statements like "ordinary users will never learn" is one thing, but explicitly making decisions to preserve that status quo is a horrible trend. It's quite a genius plan, and certainly used by organisations other than Google, but thoroughly disturbing.
I've said a few times before in the past: "knowledge is power --- they don't want you to have too much."
I'm OK with hiding "www.", but it also hides "m.", which is sometimes very confusing (I once opened an m.facebook.com link and was very puzzled why it used the mobile site when the URL bar just showed "facebook.com").
What you may be surprised to learn is that Chrome isn't just stripping "www." from the beginning of the hostname: "subdomain.www.domain.com" displays as "subdomain.domain.com".
The www.com might not be easy to get, but there is probably a www name at another important TLD that can be purchased. And if you own that domain you can easily get SSL certificates too.
Sounds like Chrome could now be a phisher's best friend…
It appears that it only does it for anything subdomainish -- that is, not the first part after the TLD. I tested it against .nz which has a silly mix of .{co,govt,school}.nz second-level domains and directly registered example.nz domains and it always displays at least one "registered" bit.
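Which also shows why this can't be done with naive string logic: knowing where the "registered" bit starts requires the Public Suffix List. A toy example of how plain label-counting falls over (a real check would consult the PSL, e.g. via a library):

```python
def naive_registrable(host: str) -> str:
    # Wrong in general: assumes every public suffix is a single label.
    return ".".join(host.split(".")[-2:])

print(naive_registrable("www.example.com"))    # "example.com" -- fine
print(naive_registrable("www.example.co.nz"))  # "co.nz" -- wrong!
```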
Which is almost worse because it seems like people have put thought into this.
This entire change is a security risk - there is no guarantee that any subdomain has the same owner as the primary, and that's not even getting into subdomain hijacking.
No, this is simply a bad implementation. As has already been mentioned, for internet services that also have websites, the www. subdomain makes total sense.
Facebook aside, don't the majority of sites show you the mobile version by looking at headers? Isn't that the whole reason the "Show desktop version" feature even exists in Chrome – to send the desktop header? Very few sites actually use www. vs m.
Clearly, in those cases, people aren't "confused" by the fact that they are seeing a mobile version on www., so other than the fact that you're used to Facebook specifically working this way, wouldn't you just send the Desktop header whenever you get mobile and want desktop?
Just speaking for myself, but often people will link to mobile Wikipedia pages, or mobile versions of other sites, and I'll look at the URL first to see if I can change it to a desktop version.
The fact that you have to even do that shows that the m. pattern is bad. The same url should be shareable without having to worry about such things. It should show as mobile on mobile and desktop on desktop, unless you specifically tell it not to.
> Isn't that the whole reason the "Show desktop version" feature even exists in Chrome, to send the desktop header?
This is indeed what it does; I wish it did more!
Specifically, when visiting responsive pages on a phone where the mobile-viewport-size layout is just 100% broken, I’d love if “Request Desktop Site” actually set the viewport to be that of a desktop browser, and then set a low CSS/viewport zoom level to compensate. I want the dual of what happens when I set the “simulate a phone of X size” option in Chrome’s inspector!
I have a bookmarklet to specifically add/edit the <meta> tag on the page to change the viewport width to 1200px.
Works in many cases, although there still are sites that break with this. I have seen sites that use the value of window.innerWidth at load and never bother listening for changes in the width. I have seen sites that use the presence of the onTouchMove event to determine whether to use a mobile layout.
Modern websites use the screen width to decide between the mobile and desktop view. But before CSS had good support for responsive web design, everyone had to create a separate website for mobile and put it on an m. subdomain.
I don't get what you mean, I'm talking about opening an "m.facebook.com" link on my desktop computer Chrome. So not sure why "Show desktop version" is relevant.
It's not about hiding the www. or m. subdomains. It's about hiding the amp. subdomain, and Google is really invested in turning the web into a collection of AMP pages on their own servers.
Looks like this is intentional. To change it back go to chrome://flags/#omnibox-ui-hide-steady-state-url-scheme-and-subdomains and disable the setting.
Additionally, if this flag ever goes away, the "kFormatUrlOmitTrivialSubdomains" is the internal flag for this, it seems[1], though its description says it's "Not in kFormatUrlOmitDefaults"[2].
Back when they removed the "http:" from URLs, I used to use a hex editor to turn the kFormatUrlOmitHTTP bit flag off every time I got a new build, so I'd get the URL formatting I wanted, but I eventually lost the mental wherewithal to continue the hack every week.
Until Firefox leadership decide to make the same change "because that's what Chrome does". Sadly, over the history of Firefox (and before that, Mozilla/Seamonkey) the leadership there has always been WAY too obsessed with following IE and/or Chrome rather than just building the best browser and taking some chances.
Seriously, trawl through Bugzilla sometime and look at how many bugs are closed with the justification being some variation of "That's how IE does it" or "IE doesn't support that", etc. And then substitute "Chrome" for "IE" later in history, once Chrome took over the universe.
Yeah, I switched from Firefox to Epiphany (also called GNOME Web[0]) and I have the best of both worlds: a WebKit browser that's more feature-complete than luakit/surf/etc., with Firefox sync integration and no Google.
It's like an open source Safari clone and it works beautifully.
Same here. FB chokes FF every other time I open it. My CPU and RAM consumption spikes. I have to close FF and reopen. Sometimes that doesn't work either.
Whatever lag FF has is negated 10-fold over Chrome if you install uBlock Origin since there's no more tracking or ads. I can't even imagine going back to mobile Chrome or Safari, I'd rather go back to a basic phone.
Firefox Focus isn't Firefox, though; it's a private-browsing-only browser, and it's built on an entirely different engine (Chromium, IIRC). There are no tabs, history, saved logins, or anything else. It's far from being a full-featured browser. Want to disable JavaScript or images for a while because you're on data? Too bad, you can't.
I'm using Firefox Klar (Firefox Focus for Germany): I have tabs (you can't open an empty tab, but you can open links in a new tab), the session history is enough for my needs, and I can disable JS under settings (and also web fonts, but not images (that's a pity)).
For logins I'm using Keepass Android Offline, Firefox Focus/Klar can use it as autofill service (neither normal FF nor Chrome can do this).
So for me FF Klar/Focus is the best Android browser at the moment, it superseded the normal FF (with ad blocker) on my phone.
You're right that Focus is not a full fledged browser, though it is actively being revamped to use GeckoView instead of Chromium. I think the change is in beta right now.
Mozilla is also starting to put more resources into Android (GeckoView, etc). Hopefully we'll see some exciting things in this realm soon.
Focus is so crippled, for no conceivable benefit (as a user), that I find it unusable. A nice concept, but it can't even switch tabs without causing a full page reload, which is like reverting back to the pre-tabbed-browsing stone-age.
And, as mentioned, it's not even Firefox. It's just a cache-less WebView wrapper.
Even if you believe this is in theory a good idea, in practice it's clear that Chrome has implemented this extremely badly.
As Comment 5 on that issue points out:
> This does appear to be inconsistent/improperly implemented. Why is www hidden twice if the domain is "www.www.2ld.tld"? [...] If the root zone is a 301 to the "www" version, removing "www" from the omnibox would be acceptable, since the server indicated the root zone isn't intended for use. This isn't the behavior, though.
> If example.com returns a 403 status, and www.example.com returns a 404 status, the www version is still hidden from the user. The www and the root are very obviously different pages and serve different purposes, so I believe there should be some logic regarding whether or not www should be hidden.
It's not very difficult to come up with a simple algorithm that checks standard HTTP responses and implements this in a sensible way; it seems Chrome's developers haven't even stopped to think about how this should be done properly, though.
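For instance, a rough sketch of what "check before you hide" could look like (toy Python; a real implementation would live in the browser, cache answers, and handle far more edge cases):

```python
import urllib.request
from urllib.parse import urlparse

def may_hide_www(host: str) -> bool:
    """Treat "www." as trivial only if the bare domain redirects to it."""
    if not host.startswith("www."):
        return False
    apex = host[len("www."):]
    request = urllib.request.Request(f"https://{apex}/", method="HEAD")
    try:
        # urlopen follows redirects, so geturl() is where we finally landed
        response = urllib.request.urlopen(request, timeout=5)
    except OSError:
        return False
    return urlparse(response.geturl()).hostname == host
```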
This is not so much a new policy issue as a buggy implementation issue.
This is the third story today about Google's handling of, and intentions towards, web addresses and webpages in general. The theme is that Google wants, or may want, to push a new set of Google-oriented web standards. This seems the least likely of the three to be strategically related, however.
The other two: Google may/may not be pushing the open web standard towards their own AMP pages, instead:
I stumbled upon this bug too, and here's why this is not okay to me:
For a couple of hours, I thought Citibank Singapore's website was down.
If one tries accessing citibank.com.sg, there's no redirect to the www. subdomain. (That's still the case, if anyone wants to try.)
If Chrome didn't hide the www., I would have been able to tell from Chrome's search/address bar that the various banking services that I've been accessing were all on the www. subdomain.
While that is shoddy implementation on Citibank's part, hiding the www. most definitely didn't help with the troubleshooting process.
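(You can see the behavior for yourself with a couple of lines of Python – the hostname is the one from this comment, and urlopen follows redirects, so if the apex did redirect to www., the final URL would show it:)

```python
import urllib.request

try:
    response = urllib.request.urlopen("https://citibank.com.sg/", timeout=10)
    print("landed on:", response.geturl(), "status:", response.status)
except OSError as error:
    print("apex request failed:", error)
```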
I think the downvotes here are uncalled-for. This is correct. Google is leveraging a dominant market position to make seemingly frivolous changes that will ultimately benefit them commercially.
Come on, you know where this is going:
They are going to hide amp subdomain, so you don't know if you're looking at AMP or the actual destination. And then suddenly the whole world funnels through AMP.
> Come on, you know where this is going: They are going to hide amp subdomain, so you don't know if you're looking at AMP or the actual destination. And then suddenly the whole world funnels through AMP.
That's probably the reason for this utterly bizarre change.
Isn't it because URLs are the number one vector for phishing, though?
Wouldn't removing URLs altogether and moving towards full identity checks be better for the web?
And relying on Citibank Singapore to hire the right people to fix their website... never going to happen.
This is idiotic and harmful. We already lost information about the protocol, because somebody believed it is "too complex" for users. Now we're losing other parts of the URL. It's making a joke of the SSL/TLS padlock, too — what exactly is the padlock supposed to tell me? It used to signify that a "known authority" certified that I'm connected to whatever I see in the URL bar. But now that browsers take liberties with modifying the URL bar as they see fit, it becomes increasingly meaningless.
You can still click and see the whole URL. This is just making it easier for the average user to see the most important thing to them, which is the domain name.
It's not like they're just changing stuff randomly. The TLS padlock change has been going on for a while now, and not without reason. As we get to a point where almost everything is served over TLS it doesn't make sense to tell the user every time. It makes more sense to only notify them of the exceptional situation where we're on an insecure connection.
The certificate authority system is terrible, but it's what we have for now. There's been some advances to help make it better though. CT for example, ensures that if anyone starts making fake certs we can all see it at least.
I suspect (and hope) that browsers will slowly transition to using trusted spotters to verify certificates in addition to (and eventually instead of) authorities. Remember, a few years back, when Moxie Marlinspike made that promising but underspecified cert verification system? It relied on the user supplying a list of trusted verifiers, and the browser basically goes to each of them asking: "I see cert 88:A4:etc for domain google.com – do you see the same thing?" The idea was to make it really hard to MITM someone, since you'd also have to MITM every verifier the browser asked. Not impossible, but probably harder than getting a fake cert under our current system.
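The core primitive for such a verifier is simple enough. Here's a toy version of the question each one would answer – "what certificate fingerprint do you see for this host?" – in Python (a real notary would query from its own network vantage point, which is the whole point of the scheme):

```python
import hashlib
import socket
import ssl

def cert_fingerprint(host: str, port: int = 443) -> str:
    # Connect, complete the TLS handshake, and hash the leaf certificate.
    context = ssl.create_default_context()
    with context.wrap_socket(socket.create_connection((host, port)),
                             server_hostname=host) as sock:
        der_cert = sock.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest()

print(cert_fingerprint("example.com"))
```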
Clicking doesn't reveal the URL. You have to click and use the left arrow key, at which point the protocol and www prefix appear.
Just like when Chrome hid the protocol part of the URL, you can even click in the URL bar and copy it without seeing the protocol (or, now, the www prefix) at all. I think it results in a confusing experience when you paste it. (Ordinary users will say: "I copied example.com but I pasted https://www.example.com … why??")
> "This is just making it easier for the average user to see the most important thing to them"
This is exactly the wrong way. The domain name system is simple and easy to learn, partly because it is without ambiguity. It has been an essential part of our lives for several decades by now, and users should be expected to undergo the effort of looking into how it works for 5 minutes once in their lifetime. (Arguably, parsing a URL is an important and essential skill nowadays, like adding.) Obscuring it and introducing ambiguity not only fails to help; it is an essential hindrance to understanding.
> You can still click and see the whole URL. This is just making it easier for the average user to see the most important thing to them, which is the domain name.
Thanks for adding a step for when that user's most important task is telling us, the folks supporting them, what actual URL they went to. Per other folks in this thread[1], it isn't as simple as just a click.
> This is just making it easier for the average user to see the most important thing to them, which is the domain name.
> It's not like they're just changing stuff randomly.
Can you link to the user study or general cost/benefit analysis or something else saying it's not random? I'm having a hard time concluding that the cost of removing parts of a domain name only in some cases is outweighed by the benefit of removing a few characters from the user's address bar.
They didn't care so far because it was so confusing. The hope is that by showing something that's user-relevant (the website's name and the security level), it will become more useful for the average user.
What if the user sees "Wik1pedia.org/wiki/Canada"? Or "аррӏе.com"?
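(The second one is at least mechanically detectable. A crude mixed-script check in Python, using the first word of each character's Unicode name as a stand-in for the real Unicode Script property that IDN policies actually consult:)

```python
import unicodedata

def scripts_in(label: str) -> set:
    # "LATIN", "CYRILLIC", ... for each alphabetic character in the label
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

print(scripts_in("apple"))   # {'LATIN'}
print(scripts_in("аррӏе"))   # {'CYRILLIC'} -- all-Cyrillic lookalike
print(scripts_in("аpple"))   # {'CYRILLIC', 'LATIN'} -- mixed: suspicious
```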
Hiding random parts of the address isn't going to make browsing the web better. The main purpose of URLs is for hyperlinks, not as a highly intuitive user interface. Users who don't know how URLs work don't care what is up there. They only care that what they are looking at is what they expect, and a way to get to where they want to go. And that's a complicated problem.
The URL bar hiding thing isn't for users, it's for Google to push Google search. That's why they attempted to remove the URL bar entirely four years ago and replace it with a search bar.
Also, if you go into about:config and turn off keyword.enabled, the address bar will no longer search.
It's very useful if you don't want to Google internal/client URLs just because you accidentally copied a space at the start or the hostname doesn't resolve in your current environment, etc.
That has nothing to do with my argument. Just because people can't detect homoglyphs doesn't mean we should keep overloaded URLs and shouldn't strive to make them more user-friendly.
There's still value to be gained from having easily readable urls.
Right. So messing with the URL bar is pointless. If people actually don't like it, get rid of it, but provide some other means to establish authenticity of what you're looking at.
www.example.com and example.com do not have to be the same website – and there are cases where they aren't the same site!
Which site you visit does matter, and not just for internet banking.
If things become that easy when we hide them, maybe we should hide half of the traffic lights as well.
Those examples don't prove anything. First off, the HTTP ones would appear different: they would have no lock. And 99.999999% of websites don't serve a different www. and non-www. page. Showing www. for that one-in-a-billion site, for that one-in-a-million user, is insane.
Because arguments aren't always useless like in your example. Might as well just do away with the whole URL bar and just have a green checkmark if Chrome thinks it's the site you want.
I mean, why should a user see "Wikipedia.org/wiki/Canada" when all they care about is "This is the Wikipedia Page for Canada"?
I know you’re being sarcastic, but if Chrome could, with perfect accuracy, indicate whether this was “the site you want”, why not do away with the URL?
Mind you, I’m not suggesting to do away with linking, as some rando suggested this implies. (While chrome doesn’t show the protocol prefix, it still copies the prefix when you copy the url, so imagine a similar ui.) But for most users, wouldn’t a ui that shows “server identity” in some more user-coherent way be what they want?
In particular, do subdomains help or hurt phishing detection?
You're making a hypothetical based on "with perfect accuracy"... but the much simpler change here (www vs. no www) is not done "with perfect accuracy", as clearly outlined by a bunch of comments in this thread.
So you're saying we should show all users www., just because of that one site in a billion that serves a different non-www. page, for the one user in a million who would even notice the difference? Chrome is a browser for mainstream people. The feature you're asking for is for power users and extreme edge cases.
If we could, I'd be all for it. Have an option that says "show URL bar or not" and by default hide the URL bar, optionally show the whole thing. Especially on space-constrained devices like cell phones where every pixel counts. Just show the page's title.
I think we're a long way away from that ideal, though, and some web pages may not be designed with this ideal in mind.
If the browser absolutely 100% of the time knew that difference and showed "Wikipedia FR > Canada", wouldn't it be much simpler for the average user?
The browser could even show specialized UI such as "FR" as a clickable dropdown menu to allow users to switch languages. Chrome already does this for searching a single website through the address bar (type domain.com TAB)
Basically these changes are not thought for you.
You are not representative of the average Web user.
No, it's really important to retain all those dots and slashes. This is not sarcasm. I'm being completely serious when I say this. It's really easy notation, and the differentiation of context for dots, slashes, question marks and hashtags is really useful.
Wikipedia FR > Canada
I look at that angle bracket with the white space, and I get chills. And I'm not even drawing attention to that oh-so-glaring omission of the /wiki/ context. Truly horrifying.
Hiding the URL would be a terrible idea, no matter how much "simpler" it would be for the average user: it would either only be enabled for a handful of websites chosen by Google (which would mean having an inconsistent UI) or create a lot of security issues (what if someone creates a website and manages to also display "Wikipedia FR" with a similar layout?).
Maybe it could show "Canada - Wikipedia" like the page authors intended as a title. Maybe if the page authors want to have links between languages, they could code that in a standard markup language themselves.
These aren't decisions for a useragent to make, and there aren't enough browsers out there for people to have a reasonable choice.
Bastardising the url isn't a solution to anything, it's a step towards something that Google, not users, want (in making AMP "trivial").
A useragent is literally a user's agent, it is supposed to help its user browse their target website. It is not supposed to help the target website show things to the user.
It matters because deep-linking is possible on the web.
Hiding it just seems like a trivial UI matter that makes things slightly more obnoxious when you do care.
I'd be surprised if only showing the domain vs domain+path made any difference on phishing results.
I don't think these little tricks do much for the user. For example, browsers now highlight https websites with green in the url bar and show a little lock icon. But how is that actionable information for the user? To what extent does that mean you can trust that website, and how does the average person interpret it? Phishing websites use https, too.
I would avoid stealing any pages from Safari's UI. That browser doesn't even show favicons on browser tabs to let you quickly distinguish them.
You're missing jwr's point. He's arguing that this is harmful for users, especially the ones who don't know what the words mean.
If I were solving this, I'd instead push to eliminate "www" altogether, not sweep it under the rug. It was useful circa 1996, when users might plausibly be using something other than the WWW with a browser. But it has become entirely vestigial.
It's not the same issue at all, in that domains with different suffixes are controlled by different people while foo.com and www.foo.com are controlled by the same people.
If you have an example of somebody who needs to serve different web content for foo.com and www.foo.com, I look forward to seeing it. But I've never seen one, and when I've seen it happen accidentally it's due to idiocy.
> while foo.com and www.foo.com are controlled by the same people
Sometimes. Far from always.
In some environments, `www` may be under an entirely different administrative domain, with lesser authority than the top level domain which is delegating web services to the `www` group by way of creating a dns record and/or adding an http(s) redirect to the parent domain.
Having some string values arbitrarily considered trivial is dangerous.
See:
lbl.gov has address 128.3.41.146
vs
www.lbl.gov has address 35.196.135.136
The root domain points to hosts at the lab. The subdomain has been delegated off to Google.
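Easy to reproduce (Python stdlib; the addresses will of course be whatever the records say when you run it):

```python
import socket

# Compare the A records of the apex and the www host; per the comment
# above, they currently point at entirely different infrastructure.
for host in ("lbl.gov", "www.lbl.gov"):
    print(host, "->", socket.gethostbyname(host))
```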
This may be true for LBL, but it's not necessarily so. They don't serve different content, and I don't see anything running on lbl.gov that couldn't be handled the same way.
These string values are already considered equivalent, which is why Chrome is making this change, and why every reasonable site has one redirect to the other.
It's due to different groups controlling different parts of the infrastructure, allowing for separation of privilege -- and is the whole reason www even existed to begin with.
Often these separate groups aren't part of the same organization. They're a different organization or contractor paid to maintain a web presence.
Yes, but those are decisions made by whoever controls foo.com. This may not be a good decision by Google, but I don't think they should be held responsible for a decision that was made by whoever controls foo.com.
Either way, www.example.com and example.com can differ in terms of IP address, underlying hardware, actual website content, and probably other things I don’t know. They are different URLs. It seems problematic to assume they are the same.
Not at all. People already assume they are the same. And they have for 20 years. Nobody reasonable serves up different content on the two URLs. Anybody clueful redirects one to the other. The only reason they're separate is that a) the web wasn't dominant when it was introduced, and b) technology of the time made it hard to manage traffic in ways we can now.
The 9th comment is explicitly described as "bad results"; it's about somebody who doesn't have a redirect. So that for me is in the "unreasonable" category.
The 3rd is about pool.ntp.org, which is a random NTP server and shouldn't be serving up web content. They did happen to pick www.pool.ntp.org as the URL for the docs on the NTP Pool Project, but if "www" had never been a thing, they would have happily picked something else. E.g. poolproject.ntp.org or ntp.org/pool/ would have been fine.
I realize it is an important distinction, I'm glad you do as well.
Just as ftp.mysite.com is not mysite.com, and mysite.com is not mysite.io, and http://mysite.com is not https://mysite.com. You get the point.
They are all different and important in my opinion. Any argument that hiding the "www." part makes it easier for the user is equally applicable (and wrong) to ".com"
You can keep repeating your point, but if you want to convince me, you'll have to actually address my demonstration that the two are in fact not equal in practice.
I'm not arguing for Chrome's implementation. I'm saying we should do the more useful but harder thing of just not using "www" as a thing in browser URLs. They have correctly identified it as redundant, but instead tried to fix it by being too clever.
As I mention elsewhere, the first two are bad examples. (In fact, ntppool.org and www.ntppool.org are the same thing.) The third is a hack from the era before responsive design, browser sniffing, and polyfills existed. It should probably die too, but doesn't have to here. The m.tumblr.com name is distinct from tumblr.com and is of the form I think better. Note that they didn't use www.m.tumblr.com.
With "only" 46.5% of all domains being ".com" I guess you are technically correct. When the next highest TDL (.org) rings in at only 5.1%, I think we can agree the the overwhelming majority of sites are based on .com.
Of the TDL you mention the top one is .jp at less than 2%. If you remove the qualifier under the .uk TLD you get an additional 2%. The rest don't make the chart.
It's actually very useful for isolating access to the root of the domain. Say, for instance, you use a third-party SaaS/CMS to host your website and have other services on other subdomains. If it's hosted on the root of the domain, it has more power than if it's on a subdomain.
> If I were solving this, I'd instead push to eliminate "www" altogether, not sweep it under the rug. It was useful circa 1996, when users might plausibly be using something other than the WWW with a browser.
No, it was useful back when abstracting machine identity from domain names (so that the two stood in a many-to-many relationship) was less common, and “www” was therefore the most specific domain name element for the server being accessed. (And a system that needed more than one server might have a homepage on “www”, and various subsites and apps on “www1”, “www2”, etc.)
OTOH, there may be places that still allocate servers that way for simplicity.
The modern solution to these issues would be an SRV record on the appropriate protocol sub-domain – which, somewhat surprisingly, mainstream browsers have never actually honored for HTTP(S).
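For reference, querying such a record looks like this (using the third-party dnspython package; `_http._tcp` is the conventional name form, and most domains – example.com included – won't publish one):

```python
import dns.resolver  # third-party: pip install dnspython

try:
    for record in dns.resolver.resolve("_http._tcp.example.com", "SRV"):
        print(record.priority, record.weight, record.port, record.target)
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("no SRV record published (the usual case for HTTP)")
```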
Well, we might as well drop the entire part after domain.com/ (the file path), since non-techy people don't really care about it. All they care about is clicking links and navigating...
Maybe the address bar UI will next hide query string parameters because they are an implementation detail? So Google News would be displayed as "news.google.com" instead of "news.google.com/?hl=en-US&gl=US&ceid=US:en".
There thankfully is a setting to show the full url, but yeah, Safari on macOS does that by default. The change occurred around Lion (give or take a major version) if memory serves me well.
From the business point of view that would make perfect sense. The user won't be able to remember the URL, and will even second-guess it, so they would need the "help" of Google – because everyone knows that "Google is your friend".
TBH, so long as there always remains the option to show the full URL, I'd be totally fine with completely hiding it by default. Safari all but does that right now.
99.9% of users don't know either way. The change neither improves nor harms their experience, it merely obfuscates what's actually going on under the guise of "user friendly."
Presumably your question is for the Chrome team, and I'm gonna assume the answer is: yes, of course they did research. They did research for the padlock, etc too.
Anytime the "average user" or numbers like "99.9% of users" are mentioned, red lights should go off. These kinds of claims are condescending, often untrue, and rarely based on facts.
Let's dumb down the internet even more because non technical users feel confused. Maybe we can actually remove the url field and just let the isp decide where they should go? They could offer a choice of 10 popular sites, like TV channels.. :)
We're always going to need to see URLs. They're not going away. They're just talking about a better default view to show the user relevant information about where they are, which would be a good thing.
Being generous, maybe half of users can look at a URL and immediately identify the domain they are on. That's terrible, and goes a long way to explaining why phishing attacks are so prevalent.
I'm actually pretty excited to see what they come up with. If it's even marginally better than what we have now I'd be all for it. I'm guessing they'll end up showing some combination of the domain and page title by default (which might incentivize more sites to FIX THEIR GODDAM TITLE SCHEMES!).
I can see the future now: the user enters his target (mybank) into the Google search bar on his Google Android device; the device opens Google Chrome with the Google AMP page for mybank already loaded. The user never has to worry about URLs or where exactly he is entering his banking information and login criteria. Google makes everything a clean and seamless experience, and the user never has to leave its warm embrace.
Add eyeball tracking into this mix, and we can "allow" users to "experience" unskippable ads.
In all seriousness, I have no doubt this is due largely to Google's frustration at Ad Blockers. If there is no URL, and you're in the Google Garden protocol, there is no way to block ads, or at least no way to NOT download them.
Great idea! Hey, since these are non-technical users, why don't we just eliminate those pesky hard-to-use computers and just put the whole thing inside their TV? Y'know, kinda like 23 years ago... https://en.wikipedia.org/wiki/MSN_TV
This is a very bad analogy. Anyone in a car crash potentially benefits from airbags without knowing anything about them (or even if they exist at all).
The 99.9% of people who don't even know the difference between www and non-www will never directly benefit from seeing www, ever.
> The 99.9% of people who don't even know the difference between www and non-www will never directly benefit from seeing www, ever.
You don't need to know the difference to be able to read the URL off, potentially to someone who does.
It's not impossible (though it's not a good idea) for “example.com” and “www.example.com” to both host web content, and whether or not they know or care about the meaning of the domain name, someone accessing one should, in the event they have a problem, be able to read off which one they are accessing to the person trying to help them resolve it.
We must avoid friction between two types of user, 99.9%er and 0.1%er. We could have separate browsers aimed at each.
One browser should be dead simple, secure, and streamlined, aimed at the 99.9%s. Maybe it could be named after a metal.
Another browser, for the 0.1%s, should include technical arcana on screen and have more mutability, perhaps even at the cost of some performance and security. This one could be named after some kind of canid.
Google wants to destroy the URL so the only way to find something will be via Google... They also want to tie your identity to each webpage that you author via a certificate so that all governments can clamp down on fake information or have the opportunity to in the future.
Given the adoption rate of SSL, I imagine the padlock itself will become useless even without Chrome's changes. Does it mean anything if almost every website has it?
The clearly announced intent of at least Mozilla and Google (and I'd assume Apple and Microsoft but I don't pay as much attention) is to focus on highlighting the insecure state because that has much better security UX. Labeling one site you visit today "Not Secure" stands out. With luck it might be enough that you don't type in your credit card details.
That's why Chrome is moving away from showing the padlock to displaying "Not secure" for sites that aren't secure. The padlock will be going away entirely; secure is the default state.
No, the padlock means that you are likely connected to the website that the URL bar shows you. This is useful and should not be discarded because of condescending ideas about "average users". It also has the advantage of being easy to explain.
Some people assign additional meaning to the padlock, which should not be done. It doesn't mean you are talking to your bank, it only means that you are talking to the website shown in the URL bar and that reasonable (simple) checks were performed to make sure that is the case.
I'd suggest we invent something better before we start breaking it.
It started being meaningless thanks to Let's Encrypt. Before it meant you had to show your ID and banking info to a "reputable" corporation for them to make a cert for you. Yes I know I know, not always the case, but...
LE means that the mantra "if it's https then it's a secure and reputable website" is now outdated.
> Before it meant you had to show your ID and banking info to a "reputable" corporation for them to make a cert for you.
No, it didn't. Let's Encrypt made free certificates easier to get, but it doesn't do less verification than other CAs' domain-validated products.
How do you verify the authority owns the content? For instance: AMP urls serve content from a different authority than the one that produces the content.
While sure, www seems odd now, it's still a subdomain and we're inching into territory of obscuring things that matter for small gains in end-user perception that aren't _that_ impactful.
My bet is this decision is driven somewhere by marketing morons who want "www.google" (presumably a domain that will exist at some point; the TLD already does) to render as simply "google".
Technically there's no reason why a TLD can't have an A record. If they own "google." they can point it to whatever address and serve whatever they want.
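Easy enough to check for any TLD you're curious about (Python; a trailing dot makes the name absolute, and whether a given registry actually publishes an apex A record varies – a few famously have):

```python
import socket

for name in ("ai.", "google."):
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror:
        print(name, "-> no A record")
```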
I've heard people say "backslash" when they insist on reading out the whole URL with the protocol (I'm pretty sure that's the wrong slash), but I've never understood why people felt the need to say it at all. Do they type the protocol into the address bar when they visit sites?
My guess is they're patiently trying to train users to prepare them for a post-URL era. I don't remember where, but I recall hearing that Google was trying to replace the standard. Sort of like how Apple has had to retrain people to think not so much in terms of files but in terms of apps.
This is going to be a problem whether or not Chrome changes www.example.org to example.org. There is a _very_ non-trivial chance the person was going to write down example.org anyway.
If in most users' minds "www.example.com" is the same as "example.com", then "example.com" is less confusing, because they have probably never heard of the word "subdomain".
My admittedly anecdotal evidence suggests that they do not. From my other comment: I worked in tech support for a large org (300+ users) for most of my career and have dealt with most types of users. I've only seen them interact with the address bar in one of two ways: explorer shortcuts on the desktop/browser bookmarks (few) or sticky notes on the monitor or keyboard (many).
If you count only the people who require tech support then of course you're only going to see the people who require tech support. But they're not the only users.
Developers and techies are users too and they're much less likely to call tech support in general.
This harms them far more than it helps the people who need help.
And this move was made under the guise of improving UX for normal users, not users who know how subdomains work. What's the point you're trying to make?
Same exact reason that api.domain.com or beta.domain.com matters - you're on a subdomain, not the root domain. That the internet and world at large made www the "kind of" root domain in many cases is an unfortunate thing, but I've not seen a modern server configuration that doesn't handle this case by default.
But when using HTTP(S), you nearly always expect the "www." domain and the root domain to host the same content. It's very, very rare that this isn't the case. Chrome appears to be hiding "m.", too, which is unfortunate (as would api. or beta. being hidden), but "www." is such boilerplate for HTTP at this point that I don't think it matters whether it's displayed or not.
I think it matters, quite a bit actually. My expectation as a user is that the URL I see in the bar is the URL of the site I'm visiting. If it's not accurate, then why show it at all?
Because the second level domain is far more important than the subdomain. The second level. www.example.com and example.com usually have the same contents, but are always controlled by the same group.
It's not true that the second level and third level are always controlled by the same group (most of .uk for example). You can put an SOA record anywhere.
But the entity that controls the second level domain always controls all of its subdomains, not just www. Historically www has been a special case, but that's not a requirement and that's on the decline.
It seems like the confusion this change causes negates any benefit it could have.
This is just going in a circle. If they don't care about it then it makes no difference. A few characters in an address bar that they pay no attention to is not significant noise. Meanwhile the people that do care have less information.
They don't need to care about checking whether it's there or not or seeing it at all. But in a general sense, users do care about reduced clutter and aesthetics, and this is a way of improving aesthetics and creating a more consistent URL bar. It's a tiny bit more consistency and a tiny bit less clutter, but it's not nothing.
The users who truly care can just click the URL bar. (Someone said you also have to press the left arrow, which if true seems like a bad decision. I would agree the full URL should always be visible whenever the cursor focus is in the URL bar.)
If the browser uses some technique to detect that www.domain.com is functionally identical to domain.com for a given domain, then I don't see a serious problem with this. But if they are short of that certainty, they're obscuring a critical part of the URL, and harming usability (e.g., if I want to jot down a site's URL for later use, I might get something unexpected).
That's not the case. I run a server that does not respond to "www." If I enter "www.myserver.com" into the address bar, I get a DNS lookup failure, but the address bar is now showing "myserver.com." That's damn confusing, and this is an idiotic default on Chrome's part.
Likewise, suppose someone's bank only works through www.bank.com. But the URL bar says bank.com. If they try to type in bank.com, it doesn't work.
But hey, now we don't have to see www which has been around forever and is a surprise to no one!
Funny enough, I visited a site for some task or other just yesterday for which I just typed the base domain, and got ... nothing. It still required the "www." prefix -- no redirect, "ANAME" DNS-side hacks, VIPs or anything in place to make the base domain a reachable URL. This new Chrome change, if it's truly doing naive subdomain hiding, would be a really bad UX for sites like that.
Well, browsers already support a better form of this feature. On the server, you set up a redirect to your preferred domain: you redirect https://www.example.com to https://example.com, or vice versa.
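For instance, a minimal sketch of such a canonical-host redirect in Node, with example.com standing in as a made-up canonical name (a real deployment would of course sit behind TLS):

    // Permanently redirect any request whose Host header isn't the
    // canonical name to the canonical name, preserving the path.
    import { createServer } from "node:http";

    const CANONICAL = "example.com"; // hypothetical canonical host

    createServer((req, res) => {
      const host = (req.headers.host ?? "").split(":")[0];
      if (host !== CANONICAL) {
        res.writeHead(301, { Location: `https://${CANONICAL}${req.url ?? "/"}` });
        res.end();
        return;
      }
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("hello from the canonical host\n");
    }).listen(8080);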
> If the browser uses some technique to detect that www.domain.com is functionally identical to domain.com for a given domain, then I don't see a serious problem with this.
Say, for example, if the canonical URL doesn't have a "www" in it?
"If you click into the address bar to copy/paste, the full URL will come back."
Which is also terrible behavior. Unwanted characters hidden into your paste buffer is at best unexpected and capricious behavior and at worst a source of serious, possibly catastrophic consequences (depending on what, and where, you are pasting).
How soon until a doctored-up paste buffer contains, by design, a newline character? I'm sure there must be some use case that calls (or appears to call) for this...
Thank you for validating that I'm not the only person who hates Chrome's horrible clipboard behavior. When I highlight something and copy it to my clipboard I expect exactly what I've highlighted to be on my clipboard -- not some editorialized version of it.
Lest anyone think that Chrome is being innovative here, this is Safari's default behavior for when the URL bar isn't focused. When you click on the URL bar, the subdomain, protocol, and path all appear.
Which drives me crazy, because my users send me screenshots in support requests. Now I have to spend even more time explaining to them how to click/copy/paste the URL. It was a terrible UX choice when Safari did it, and it's a terrible UX choice now that Chrome has done it.
That's just a bug which I'm sure will be fixed in the next release.
For this to actually help scammers (after the bug is fixed) they'd need to own www.example.com but not example.com, which is unlikely to say the least.
Dear Google, just do syntax highlighting. Make the subdomain gray. You can even color https: green and http: red. But don't hide them. I really think syntax highlighting will accomplish the same thing for less technical users.
Why does this matter? Users don't care, and it's easier to remember/understand that all websites are just "x.com" rather than sometimes being "www.x.com". If you have some server/troubleshooting/network/dev problem with it, the missing info should be moved to developer tools.
This is just removing data that is useless and confusing to 99.9% of users - what's the problem?
Now every single website that wants to support Chrome needs to ensure that https://foo.com is always redirected to https://www.foo.com, or at least works as if it's www. It doesn't matter that most websites already do this, it's not standard, and represents Google breaking standards because they are big enough to do their own thing.
It's just one of the 1000s of papercuts that google is inflicting to keep users from switching web browser.
Seriously? This is such a bad change, I hope they revert this patch in the next update. There's a world of difference between the two and this is going to cause an ecosystem nightmare.
> This is a dumb change. No part of a domain should be considered "trivial". As an ISP, we often have to go to great lengths to teach users that "www.domain.com" and "domain.com" are two different domains...
What ISPs teach their users anymore these days? Why the heck do we want to go back to that?
Time for a modicum of historical perspective.
If you care about usability this is clearly an improvement. This is part of a long-running industry trend -- Safari does this too -- to improve the usability of the Internet and technology in general.
At EVERY STEP in that journey there has been whining and griping from the more technically advanced folks (like all of us on this forum). They zero in on the negatives, the tradeoffs that come with simplification. They don't see the positives because they're technically advanced enough that they don't benefit from simplification (or so they think).
Then the griping whines (sic) down, they realize the world didn't end and the downsides really weren't that bad, and we move forward towards a better, simpler, more usable web.
Like, who actually runs separate HTTP servers on example.com and www.example.com anyway? Everyone is hyperventilating over contrived "the principle of it all" examples. Bottom line, Apple & Google are putting usability above technical pedantry. That's the right priority for mass market technology products.
The first citibank url doesn’t load for me, so it is site vs not site rather than two different sites, and both those ntp urls are redirecting me to https://www.ntppool.org/en/ - perhaps there’s less substance than meets the eye to those complaints.
No, actually, there's a really good complaint here -- if you're hiding "www", then the two Citibank URLs look the same when they are actually not in Chrome, and the user will be confused when typing in the URL that they've visited many times before and not actually being able to visit it because now Chrome obfuscates the "www" part.
I think it's more on the website owners to fix their sites; users expect domain.com to be the same as, or auto-redirected to, www.domain.com, or vice versa.
I think what Chrome is doing is a good thing; it will push website owners to correctly set up their domain redirects and, in the end, lessen end-user confusion.
If I asked any non-technical person under the age of 20 the difference between www.google.com and google.com, they probably couldn't tell me. Users expect www.example.com to equal example.com. If a website isn't redirecting one to the other, it is doing something that is incredibly anti-user, and it is a good thing that Google/Apple are forcing them to do it differently.
This is not about users knowing the difference between www and no www on a site that redirects one to the other, but about users who don't know the difference on a site that doesn't. See my comment https://news.ycombinator.com/item?id=17930243 for an illustration of one of the issues that may occur.
> Users expect www.example.com to equal example.com.
No, they don't. They expect that the address they write down works when they type it back into the address bar, though.
> If a website isn't redirecting one to the other, they, are doing something that is incredibly anti-user and it is a good thing that google/apple are forcing them to do it different.
You keep claiming this without any sort of evidence or backing. This may be anecdotal, but I worked in tech support for a large org (300+ users) for most of my career and have dealt with most types of users. I've only seen them interact with the address bar in one of two ways: explorer shortcuts on the desktop/browser bookmarks (few) or sticky notes on the monitor or keyboard (many). I'd be willing to bet my next paycheck that not 5% of them are aware that google redirects to www.
At least for those users, none of them will benefit from chrome's mishandling of www. Many of them will suffer from it. I'd be willing to stand corrected with a proper study though.
Also, subdomains aren't only used to make http urls pretty, they are intended to be used to refer to actual physical hardware hosts that belong to a given domain. All published standards I know of are written with this in mind. None of them mandate that www be synonymous to the parent domain, not even those responsible for web technologies. Organizations that follow standards are not at fault for following standards.
If having two different hosts on www and the parent domain was truly "incredibly anti-user" (a dubious claim), let Google introduce an RFC at the relevant standards bodies and have it go through proper scrutiny first.
I think this is a fair comment but Chrome has been doing this a lot lately. They'll make a change and developers must scramble to fix their websites.
It's the same with autocomplete earlier this year. One day Google decides to ignore autocomplete="off" and all hell breaks loose.
Interesting to note, they have reverted this change. Google now respects autocomplete="off" in some scenarios (i.e. when autofill is not triggered via name attribute).
If this is the worst example anyone can come up with, debugging a misconfigured site while relying exclusively on screenshots of beautified URLs, then I think it proves my point.
There will always be tradeoffs in advancing usability. This is objectively a small one. The problem is the unstated lack of appreciation for the value of usability improvements, because it's usually a more technically sophisticated person criticizing it who's comfortable with the way things have been. If you care about usability, this is an immense net positive.
How is having to "just know" that you have to type www to get that page to load, despite it being presented without www a step forward for usability for technically unsophisticated people? It just seems confusing to me. I get that it looks "cleaner" but I am having a hard time figuring out how it makes anyone's life easier, or how it actually makes the web more usable.
And just because it's not good practice doesn't mean that making the www and non-www versions look the same to the user, and having one of them "just randomly not load", is a step forward for usability.
It seems like the Internet is moving more to a "centralized" design where certain actors have decided "well, here's how we're going to do things now, deal with it".
The golden age of the Internet is already dead, guys. We're unfortunately over the hump.
This change goes far beyond merely hiding the "www." prefix. I'm not making up these examples:
1. "m.tumblr.com " and "tumblr.com " are BOTH displayed as "tumblr.com" even though they're literally different sites.
2. "www.example.www.example.com" displays as "example.example.com" which means all "www" subdomains, whether leading or not are being stripped out.
3. In the extreme case, "www.m.www.m.example.com" shows up as "example.com" which is pretty misleading.
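From those examples, the display logic appears to amount to something like the following; this is a reconstruction inferred from the observed behavior, not Chrome's actual code:

    // Drop every "www" and "m" label from a hostname, wherever it
    // appears -- which reproduces all three examples above.
    function stripTrivialLabels(host: string): string {
      return host
        .split(".")
        .filter((label) => label !== "www" && label !== "m")
        .join(".");
    }

    console.log(stripTrivialLabels("m.tumblr.com"));                // tumblr.com
    console.log(stripTrivialLabels("www.example.www.example.com")); // example.example.com
    console.log(stripTrivialLabels("www.m.www.m.example.com"));     // example.com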
Usually, the Chrome team is very thoughtful about decisions that impact security. I'm surprised this was released in such a half-baked state. I hope this is not an indication of how Google's plan to "kill the URL" will work out.
These types of usability steps disempower the user from having control. It doesn't surface what's actually happening and tries to "fix mistakes" with blunt overly-generalized presumptions.
This isn't the first time it's happened.
Take the https-everywhere changes. If you explicitly type something like "http://example.com.:80/" but the browser knows about an SSL cert for the domain, it will attempt to shuttle you off to https, failing to do the SSL handshake because of course it's port 80. Adding the protocol and the port isn't enough of a hint that I know what I'm trying to do.
What's worse, this is a domain-wide setting. If you have say, local.example.com, the browser will try to protect you again.
You have to go to an obscure preference and make the browser forget about this on a domain-by-domain basis, which will reset the next time it gets a wildcard SSL cert for the domain.
This thing is of the same vein; it's the new Ctrl-Q of the browser world: a feature that, due to some dogmatic ideology, doesn't have a "knock off the bullshit" setting.
Somehow irreversibly child-proofing software by making it not do what you tell it and hide important details has become fashionable, it's nonsense and needs to stop.
You begin by criticizing those who would over-generalize, and then proceed to over-generalize by complaining about some SSL stuff. Again this is "the principle of it all" argumentation that doesn't cite any actual concrete real world issues with the change at hand.
I haven't used it yet so I can't cite any actual concrete real world problems it solves either.
Are you claiming an attitude of hiding important technical details and creating dysfunctional smart systems in the name of usability doesn't exist? Or that these are unrelated things and the narrative arc isn't there?
There's numerous other examples, such as the gtk3 file-chooser dialog which hid a number of important controls in the name of usability.
What about mobile vendors who, to make their devices friendly, remove a number of Android customizations, such as the ability to disable ambient display?
What about sites like reddit who removed a number of features such as the ability to edit a comment on their mobile site as a UX improvement?
Or what about the laptop vendors who remove keys such as Escape?
It appears that all technologies are slowly tending towards an aspirational goal of being designed for illiterate toddlers.
Luckily I know where this comes from.
Illiterate toddlers, as it turns out, use lots of software on tablets and smartphones. Looking at usage metrics and presuming that everyone is a competent adult, efforts at user-error reduction tend towards humans with diapers and pacifiers as they make the most errors and any demographics statistics of those errors will false positive as their parents, since the infants don't have their own devices.
Why do you think bright colors have better click thru rates for ads or using cartoon mascots or smiling men and women of child-rearing age increase inbound traffic and are seen as an important branding strategy for user engagement? Could it be toddlers tapping?
Maybe that solves the problem of why these boosting effects only appear on mobile devices since 3-year olds can't operate a mouse.
These trends give rise to design rules and "insights" which have a contagion effect on all software, just like the soft keyboard on the iphone led to the removal of physical keyboards from effectively all smartphones. Until every complex tool is dumbed down enough to be mastered by those who haven't learned primary colors or geometric shapes yet, this insanity will continue.
Look at Youtube's recent redesign moves for example. They recommend the video you watched yesterday again and do so for weeks. The only people that want that are under the age of 6 who watch videos on loops. I'm confident they have strong empirical data mixed with the failure to properly segment users to back this decision up. Toddlers are controlling the direction of software and it needs to stop.
What exactly is the usability improvement from hiding part of the domain name? Maybe we should be hiding ".com" because that's trivial too? Better yet, why show "google" at all if from the page it's clear you're on Google? Might as well just fullscreen the content pane and be done with it.
> What exactly is the usability improvement from hiding part of the domain name?
Quite simply the www subdomain is confusing and unnecessary. See comment from the ISP admin I cited re: user training.
> Maybe we should be hiding ".com" because that's trivial too? Better yet, why show "google" at all if from the page it's clear you're on Google?
Those aren't really serious counterexamples. ".com" is obviously not trivial; there are plenty of TLD variants in use. It is approximately 0.0000000001% as common to run separate HTTP servers on example.com and www.example.com. And clearly different domains can spoof one another's content so that's not a way to be "clear" you're on google.com, whereas this is not generally an issue with subdomains.
Again your reaction is just sort of knee-jerk exaggerated resistance to change, not actual real world problem cases.
> Quite simply the www subdomain is confusing and unnecessary.
Sometimes it's unnecessary. How is it confusing? Millions of non-sophisticated users became sophisticated users typing it, millions more type it every day. It doesn't seem prima facie more confusing than a pronoun or other oft-repeated article. Consider the beginning of my last sentence in this paragraph -- would you consider the "It" confusing, even though it's not strictly necessary?
> Again your reaction is just sort of knee-jerk exaggerated resistance to change
Perhaps your reaction is knee-jerk teleology of change as progress?
As a suggestion: maybe spend less time characterizing the approach of people that you disagree with on this topic, and more time articulating actual arguments ("the www subdomain is confusing and unnecessary" counts, even though it's arguably not particularly strong), unless you'd eventually prefer it when people make the discussion partly about the shortcomings of your approach, which are far more glaring than you've clearly spent time considering.
> They don't see the positives because they're technically advanced enough and don't benefit from simplification (or so they think).
I'm noticing despite the invocation of vague concepts of progress and usability... you haven't articulated any particular case for how this represents either. No model for why it's simpler or more usable.
"Safari does this too" or imprecise aspersions about the supposed "whining and griping from the more technically advanced folks" doesn't really cut it.
I could guess that what you mean is "oh, if you can omit something and yet it's implicitly understood, obviously that's a simplification," though that's obviously a guess. If I've misunderstood, well, there's counterexample #1, to get a little meta. If I've managed to guess correctly, the broad topic of language and notation is a fairly rich well of examples of the potential for ambiguity or outright miscommunicated meaning when things are moved from consistent, explicitly denoted syntax to implicit syntax. Or of cases where writing out what could be left implicit isn't particularly burdensome to use or even require.
"the world didn't end" .... lots of bad ideas that make things marginally worse don't end the world.
"the principle of it all" Again, this is a pretty vague and imprecise charge. Do you understand what the particular principles people are registering their objection on? If so, why not name them and respond?
> I'm noticing despite the invocation of vague concepts of progress and usability... you haven't articulated any particular case for how this represents either.
I literally began my comment by citing this:
‘As an ISP, we often have to go to great lengths to teach users that "www.domain.com" and "domain.com" are two different domains...’
It takes only a very small amount of thought and empathy with the average user to understand how the extraneous www prefix can be confusing. It can lead to failures like thinking you have to use it with every website. It enables fraud by making “wwwexample.com” look more normal. Etc.
Your comment is a textbook example of “the principle of it all” argumentation. You cite no concrete examples of problems caused by hiding www from the UI. Not that we should expect none, but the right conversation to have is whether the usability benefits outweigh them. Not technical pedantry or “vague and imprecise” warnings about what may come if we let this pass.
That comment seems to be a rebuttal to the point you're attempting to make. It'd seem more apt to say you brought it in to interrogate it, rather than to say you "cited" it, and beyond rhetorical frustration ("why would you do this?"), it's not clear to me that you engaged it at all.
> It takes only a very small amount of thought and empathy with the average user to understand how the extraneous www prefix can be confusing.
It takes only a very small amount of thought and empathy with the average speaker to understand how the article at the beginning of this sentence is functionally extraneous (and is even optional in informal speech), and yet isn't particularly burdensome to use.
Perhaps the thing you're claiming is prima facie obvious with "only a very small amount of thought and empathy" is instead an unexamined assumption on your part and reflects assumptions about the average user that you have no particular claim to over anyone else in this thread.
> It can lead to failures like thinking you have to use it with every website.
"Failure" is a curious term here. The overwhelmingly common "failure" of someone adding it is comparable to the "failure" of forgoing a contraction for its full expansion. Or the article example I used above.
It's technically possible, I suppose, that a www|m.domain.tld record will simply not exist. That's a reflection of the reality that www|m.domain.tld and domain.tld don't actually resolve to the same server, and pretending they do breaks DNS. And not only is it a good bet that the failure we're worried about is more common than the failure you're worried about, the sensible way to address the potential failure case you're concerned about would be to allow an implicit redirect reflected in the URL to take place only if the www record does not exist. That'd be the user agent being helpful instead of making assumptions that break DNS.
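To make that concrete, here is a sketch of the rule using Node's resolver (a real user agent would check more than A records, but the shape is the same):

    // Fall back from "www.<domain>" to "<domain>" -- and reflect it in
    // the URL -- only when the typed "www" name has no record of its
    // own. If "www" resolves, the label is meaningful: leave it alone.
    import { promises as dns } from "node:dns";

    async function hostToDisplay(typed: string): Promise<string> {
      if (!typed.startsWith("www.")) return typed;
      try {
        await dns.resolve4(typed);
        return typed;          // distinct "www" record exists: keep it
      } catch {
        return typed.slice(4); // no "www" record: redirect, and show it
      }
    }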
> It enables fraud by making “wwwexample.com” look more normal.
This is half a worthwhile point. But only half a point because unless one goes whole hog in eliminating subdomains entirely, you can't really take out example.com.internet2.ru, and even if you did, there's also example-internet2.com, so this is part of a class of problems prefix elimination can't solve, which is a sign that maybe it's not worth it if there are tradeoffs (and there are).
> Your comment is a textbook example of “the principle of it all” argumentation.
You keep using this phrase. It sounds like what you mean is "the people I disagree with don't really have reasons they're just attached to some convention that doesn't matter because reasons." If there's a more precise meaning, try rephrasing.
> You cite no concrete examples of problems caused by hiding www from the UI.
Since your comment didn't contain any clear criticisms of the recent state of things, it seemed best to see if I could elicit those first.
Also, the most prominent concrete problem examples of how this breaks DNS weren't exactly hiding if you read the linked issue.
On a deeper level, though -- and this is an answer to your interrogation of the comment you brought into the thread -- this also handicaps people's ability to actually learn, by observation, how domains and subdomains and other aspects of URLs work. Presumably your natural response would be to appeal to the tastes of the average user, saying they don't care about such things, that it's only a concern for technically advanced users, and "don't make the user think." Spotting you the accuracy of that model of an average user (which I've yet to see a comprehensive case for), perhaps some of those things are true, and yet, as a combined package, the conclusions it leads to are often wrong. Why?
"Don't make me think" is a starting place for good UX. Not the end. The next principle for really great software might be best articulated as "make affordances for optionally advancing use." Learning how domains and subdomains work isn't required for anyone to use the web -- it never has been, because people have always been offered hyperlinks and search boxes from starting points. But the URL bar offers a real affordance for starting to unpack details of how URLS work (including domains and subdomains). An "average user" may never care to start with, but this isn't advanced tech, it's accessible to anyone who can learn how to parse the parts of a physical address, not by nerdy study but simply by incidental observation... all while not requiring people who may tune it out entirely to make any more effort than they might with the controversial change under discussion. Casting subdomain (or path) details of that language into an implicit and ambiguous new convention makes it less likely they'll pick it up. So even assuming this change can be made w/o breaking DNS -- which doesn't appear to have been addressed -- it's removing an affordance into a simple and relevant (if linguistic) tool for navigation/orientation.
So there's your harms to consider.
Cities and states are superfluous on physically mailed addresses these days if all you want is for your letter to arrive. Sometimes it's convenient to even simply give zip codes in some exchanges of address info (or to collapse the whole thing into a bar code). Would you suggest that it's harmful or confusing to continue to have cities and states as an allowable convention in addresses? If some postal/shipping service mandated the convention of leaving out cities/states, could you see that there's a credible case for harm, even though it's redundant information?
That's a comparable situation with www.
And beyond the marginal utility of space savings of ~3-4 character widths on a small screen, there's no argument I've seen for a benefit in removing it that stands up to scrutiny.
Are you ok, man? Peering through the pseudo-intellectual smokescreen, the only thing approaching a concrete example of a problem is the claim that this UI change "breaks DNS", which is trivially false. No real world examples are cited, unfortunately.
And your analogy betrays fundamentally confused thinking. Omitting "www." would be more comparable to omitting "1st floor" for one story building addresses than omitting the city or state. The only address this would "break" is a building with different tenants operating out of "123 ABC St, 1st floor" and "123 ABC St", which is a misconfigured building if you could even find an example of that.
> the claim that this UI change "breaks DNS", which is trivially false. No real world examples are cited unfortunately.
As mentioned in the comment you're replying to here, at least half a dozen such claims with examples are in the comments on the issue/ticket that started this thread, the same one that you even pulled a quote from. They've also been invoked throughout the entire HN discussion. I'm not sure if you missed them, or if you're implying that examples such as the Singapore banks or m.tumblr.com are simply made up.
> Omitting "www." would be more comparable to omitting "1st floor" for one story building addresses than omitting the city or state.
A building/floor analogy has its own issues, but it deals in enough of the same concepts as city/state/zip that it's serviceable if you prefer it as an avenue.
"1st floor" would indeed be extraneous (though correct) on single floor buildings, so sure, many people might choose to omit it from an address scheme on buildings with single floors. Plausible and not a problem in that case. And of course, people can add it for multi-story buildings where it's not extraneous. Finally, they can even add it on single story buildings if for some reason they're in the habit of using the convention, or if they don't know whether a building has multiple floors but know they want the first, and it's still technically correct and locatable in either case. And that strikes me as a reasonably apt analogy for the state of things before the change under discussion. Not a bad state of affairs really, unless someone wants to make a case that adding "1st floor" when uncertain represents a burden.
Now suppose one or more of the postal/shipping services mandate that "1st floor" is a trivial expression, and will therefore be hidden on all envelopes. Does that seem like a good idea? When an address is displayed for buildings that actually have more than one floor, who will know whether the 1st floor is implied? How will they know the floor wasn't accidentally omitted instead? Does the existence of these questions -- vs the question of whether to add 1st floor or not in the previous state of things -- and any answers there may be really constitute an experience improvement for readers/writers of addresses?
> Peering through the pseudo-intellectual smokescreen
You know, that might be the sort of thing a keen intellect that's cutting through mumbo-jumbo would say, or it might be the sort of thing that someone who's not confident that their engagement with / responses to the arguments in play speak for themselves. Seems like a bit of a gamble about how it'd come off.
That a misconfigured Singaporean banking site is the worst example in the world anyone can come up with is perfect evidence that the apoplectic reactions are unwarranted, like so many over the history of changes like this in our industry. And my comments are restricted to the "www" prefix.
> How will they know the floor wasn't accidentally omitted instead?
This is nonsensical. This www change does not hide subdomains that meaningfully differentiate among "floors". It only concerns the entirely redundant "www" ("'1st floor' for a one story building"). Accidental omission of "www" is a total non-issue... in the real world, where we live.
> That a misconfigured Singaporian banking site is the worst example in the world anyone can come up with is perfect evidence that the apoplectic reactions are unwarranted
The "worst" example? I don't think I turned in a ranking. You asked for concrete examples of a problem. It is one. It's part of an unknown but decidedly non-zero number of examples where the www subdomain meaningfully differentiates hosts in the real world, where we live.
If there's a specific reason this example or others aren't worth considering, that's a bit of goalpost motion, but a more clearly articulated case can be worth it.
> my comments are restricted to the "www" prefix.
I'm glad your comments are. The change under discussion does not appear to be. Per comment #16 under the ticket:
"the domain m.tumblr.com is shown as tumblr.com."
Apparently the policy of identifying some subdomains as "trivial" is not limited to www.
Sort of raises the question -- once a player like Google decides it can designate a subdomain as trivial over its common (but not universal!) redundancy, what's to keep them from going beyond www?
> It only concerns the entirely redundant "www"
Commonly redundant is critically distinct from universally redundant.
And allowing domain holders the possibility of treating them as redundant is a distinct situation from unilaterally imposing it.
I agree with the first sentence. MacOS has hidden extensions for... ever? And MacOS is generally considered the easiest to use and most secure desktop OS. So there you go.
> If you care about usability this is clearly an improvement. This is part of a long-running industry trend -- Safari does this too -- to improve the usability of the Internet and technology in general.
I am genuinely interested in knowing if this is a usability improvement. Citing "Apple does it too" is not convincing to me.
It seems to me if URLs are meaningless to you, hiding part of them is meaningless to you so why do it? It seems just as plausible that it's only unusual subdomains that confuse users ("Is foobaz.example.com the same entity as example.com?!"), so hiding common ones might only make uncommon ones more confusing.
The comment you are responding to (mine) literally cites this answer:
"As an ISP, we often have to go to great lengths to teach users that 'www.domain.com' and 'domain.com' are two different domains"
It takes only a little bit of thought and empathy with the common user to imagine how this could be confusing. Are you supposed to type in www with every site? Why did it not work when I typed it in? Etc... It's just confusing and unnecessary.
They're not changing whether you have to type it in. They're making it look like you didn't type it in when you did. This adds to the confusion you're complaining about, as the URL bar will look the same when you type in domain.com and it works as it does when you type in www.domain.com and it doesn't work.
> If you care about usability this is clearly an improvement.
This is a false dichotomy. There are tons of ways to improve the implementation without just hacking parts of the URL out of the "omnibar."
You could split the omnibar and/or allow it to be resized so both pieces of information can be shown. You can use differential color highlighting and/or text formatting to convey which parts google considers "trusted." You could implement an "expert mode" button somewhere that turns all of these "improvements" off. You could add an extra field that shows security-critical information not just about the URL but about the resulting connection(s) to the host.
This is hardly exhaustive, which is also a good description for Google's effort on this one.
I appreciate your effort, but I have to say, none of those would remotely constitute usability improvements for the average user. Your proposals all involve increasing the complexity of the information presented to the user. This has been the root of bad UI for ages. The harder design decision, and what our industry has slowly gotten better at, is how to reduce information.
> If you care about usability this is clearly an improvement.
The cost/benefit doesn't work out. It's a usability improvement, but it comes with a huge cost, where many, many sites have the fundamental function of a URL -- that of an address/specifier across the internet -- basically broken. The person who implemented this change either didn't work out the cost, or decided they didn't care.
> Everyone is hyperventilating over contrived "the principle of it all" examples.
How would you feel if primary keys in your database started to change their semantics? How would you feel if your phone started to change the telephone number it dialed? I guess we're kind of alright with the message app editing our messages as we type them. Now we're supposed to adapt to our tools, not the other way around, and insisting on tools doing what we say is "pedantry."
> How would you feel if your phone started to change the telephone number it dialed?
If they replaced the country code with a flag, that would be an improvement. If they replaced the carrier number with the logo of the company that's also not bad. These are UI changes and the one you proposed about the phone numbers are great :)
> If they replaced the country code with a flag, that would be an improvement. If they replaced the carrier number with the logo of the company that's also not bad. These are UI changes and the one you proposed about the phone numbers are great :)
If I were to propose a change, it would be 1-to-1. What happened in Chrome 69 isn't one to one. It's a surjection, which broke lots of URLs. That's the difference between a great UI change and an inconsiderate, idiotic one.
Again you are shamelessly arguing "the principle of it all" without actual real world examples of problems. Cite some real problems and we can have a more productive discussion. It doesn't help to falsely equate hiding "www." with corrupting phone numbers.
> Cite some real problems and we can have a more productive discussion.
Already cited elsewhere.
> It doesn't help to falsely equate hiding "www." with corrupting phone numbers.
It's a valid comparison. Both are specifiers. It's the same sort of shenanigans with poorly planned "abbreviated" phone extension dialing and outside number prefixes in my company's office causing wrong numbers and accidental 911 dialing.
What are the improvements of hiding part of the URL?
> Like, who actually runs separate HTTP servers on example.com and www.example.com anyway?
My university's site doesn't work at example.com but does work at www.example.com. I can see how someone could waste a lot of time trying to solve a nonexistent network issue because typing example.com doesn't work even though they'd seen it working before.
No this is Google (and others) attempting to fix confusion caused by developers making systems that are hard to understand for users. Nothing is stopping developers from redirecting or setting up DNS to make the two domains go to the same place.
I personally feel like any negative user experience should be addressed by the developers of the website, not the browser.
Meanwhile I'm sure there are people out there who want to maintain a distinction between www.x.y and just x.y.
Exactly, thank you!
I think first and foremost the browser should be clear and easy to use for the general user, not an easy-to-debug frontend for devs; if we do that, the browser will be a mess. Devs know where to look to see what the real address is if they get reports of their site not working, and can then fix the issue on their end. It's not like the issue is something that you can't change on the server end.
The people who run the sites should do all they can to make the experience as hassle-free as possible.
This means no www.domain.com and domain.com going to different places. 99% of users will expect to end up on the same site regardless of whether they add the www or not; to them it's the same as adding the https or not, it should "just work" regardless of what they put at the front of the URL. (Of course there will always be some edge cases, but I feel it's worth it for a better end-user experience.)
Not sure why you are getting downvotes, but I agree with you here. I think this change is going to benefit both technical and non-technical users. How many websites make the mistake of not handling www and non-www the same? Or of not having a valid certificate for both domains? It's not just a pain for the non-techies.
> Like, who actually runs separate HTTP servers on example.com and www.example.com anyway? Everyone is hyperventilating over contrived "the principle of it all" examples.
I do. Over and over again. Having been doing that for 2 decades, and with a limited budget, I require things like SNI[1] to work. But of course, I could just triple the tech budget...
EDIT: And wrt "Time for a modicum of historical perspective." - please give it to me. I am keen to learn what I did wrong all the time.
I agree. I can't remember the last time myself or anybody else actually typed "www." when going to a website.
However, Chrome should handle this intelligently, like it does with localhost vs. internet hosts. If you manually type "http://" into the address bar (e.g. "http://my-local-domain"), it will not search the usual DNS and will honor any locally configured domains you may have on your network.
This doesn't seem to be the case now with this 'www' concern, though. With slenk's example of http://www.pool.ntp.org and http://pool.ntp.org - the only way to access the second link properly is to click the link. Typing it in the address bar loads the same website as first link - so it appears Chrome is automatically adding 'www' or somehow making the request differently making the second link's page inaccessible via the address bar.
Safari's URL masking is doing a terrible disservice to society by making people ignorant. Make the Web more usable by educating people, not by hiding basic information from everyone.
The thing is, there are people who care about usability for whom this is not "clearly" an improvement. If it was clear, we wouldn't be having this gigantic discussion...
I don't want anything about the Internet to be like television, and while I know you're trying to make an analogous statement I think the real threats to freedom on the Internet are coming from entrenched interests who would take what you say literally.
I am being literal. The internet is not about freedom. It's about making money and influencing thought and action.
Realize that the majority (and I'm willing to say probably over 95%) of people today use the internet on their phone. They have a few "apps" which they switch through by swiping left or right (changing channels). They use Facebook, Amazon, their chosen politically aligned news site, Google search, and a few other things. Probably not much more than the 6 channels of TV I had as a kid.
And the Google searches are rarely if ever for anything beyond the first few links or the sponsored content. Really, page 2 and beyond might as well not exist. The users don't go there. Google could just return results from major organizations and users would be fine with it.
I lived through the PC era, from start to finish. It took me a long time, but eventually I realized that 99% of the population was never going to understand how computers work. Just the concept of a file and a folder is beyond most people (that's why phones don't have them). Once I gave up on expecting people to learn about computers, computing became much easier. Phones, tablets, and so on, for the masses; the desktop for me. And now I don't have to go fix people's screwed-up computers - they don't have them anymore.
The importance of a domain name, much less a sub-domain, is now irrelevant. A search result, one of the first few, is what matters. You will never get users to understand a URL.
At one time the internet might have been for us hackers, but that time has passed. It's now the domain of the masses.
if anything, it should be time to declare that it's a bit insane to expect lay people to be able to read from right to left and left to right concurrently, and understand all the delimiting punctuation correctly, along with unicode. it's time to restructure url rendering to be right to left, and combine delimiters. dashes, periods, slashes, and subdomains and hundreds of tlds have made it too confusing.
https://www.google.com./ should become either https/com/google/www/ OR secure/com/google/www/ OR secure.com.google.www. Browsers can then just always bold the third word?
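Here's a toy rendering of that proposal, purely illustrative:

    // Reverse the host labels and fold the scheme into the first
    // segment, per the "secure/com/google/www/" suggestion above.
    function renderReversed(url: string): string {
      const u = new URL(url);
      const labels = u.hostname.replace(/\.$/, "").split(".").reverse();
      const scheme = u.protocol === "https:" ? "secure" : u.protocol.slice(0, -1);
      return [scheme, ...labels].join("/") + u.pathname;
    }

    console.log(renderReversed("https://www.google.com./"));
    // -> secure/com/google/www/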
I think re-ordering like that would actually benefit technical users more, but it's probably too late to make such a change. I don't think we'd need that change for browsers to put the parts they consider most relevant in bold. That logic is simple enough right now.
really we are talking about rendering, right? not actually changing the protocol itself? (:lock: = lock unicode/emoji = secure http)
:lock:/com/GOOGLE/www/maps/hawaii
I am a strong proponent of the URL being part of the user interface. I should be able to manipulate what I request of a service by modifying the URL. The text in the URL should mean something.
Making the rendering that different from the actual URL would be very confusing, especially if it varied from program to program or within programs. Links would look one way on websites and then have a completely different form in the url bar after clicking them.
If you're not in Google's index, then it's likely not in Google's interest for users of Google's product to visit your site directly; the company prefers that you use their search rather than type links directly or follow links from other sites, hence they want to kill the URL — https://www.wired.com/story/google-wants-to-kill-the-url/
What's your rationale for this? The removal of the superfluous www subdomain makes sense. It puts the actual unique site information immediately after the padlock.
Most sites redirect either to or away from www when you visit them anyway.
People have always said to host your site at www and redirect to it, something I have never understood nor heard a convincing argument for.
I have my servers configured to redirect visitors away from www.
From a DNS point of view:
you can't have a CNAME at your root (and CNAMEs can be useful, especially in some DNS load-balancing situations, or for load balancing on third-party providers)
From HTTP point of view:
A no-www domain might not be the best solution if you ever want a 'cookie-free domain' (static.) for images etc., which speeds up your site. If you start with a no-www domain, you have to set up a different domain (not a subdomain) for it: like sstatic.net for SO, ytimg.com for YT and yimg.com for Yahoo.
When the browser makes a request for a static image and sends cookies together with the request, the server doesn't have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.
If your domain is www.example.org, you can host your static components on static.example.org. However, if you've already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free.
https://developer.yahoo.com/performance/rules.html#cookie_fr...
in my view, having a www record (and a no-www redirect) has more benefits than a no-www setup
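The CNAME restriction above follows from the zone apex having to carry SOA and NS records while a CNAME may not coexist with other record types (RFC 1034). A quick sketch with Node's resolver and a hypothetical domain:

    // The apex must have SOA and NS records, so it can't also be a
    // CNAME -- while "www" can be a CNAME with no trouble.
    import { promises as dns } from "node:dns";

    async function inspectApex(domain: string): Promise<void> {
      const soa = await dns.resolveSoa(domain);
      const ns = await dns.resolveNs(domain);
      console.log(`${domain}: SOA via ${soa.nsname}, ${ns.length} NS records`);
      try {
        const cname = await dns.resolveCname(`www.${domain}`);
        console.log(`www.${domain} is a CNAME for ${cname[0]}`);
      } catch {
        console.log(`www.${domain} has no CNAME (or does not exist)`);
      }
    }

    inspectApex("example.com"); // hypothetical domain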
> People have always said to host your site at www and redirect to it, something I have never understood nor heard a convincing argument for.
It comes from the long-lost days when people actually ran their own servers, and would have [www,mail,ns1,ns2,intranet,chat].example.com servers. If you already have a lot of special-purpose domains, it seems weird to privilege "www" as the real example.com, so the recommendation was to redirect from it.
So we should have a limited control medium requiring extremely high levels of capital and regulation to provide content in a limited set of locations that can only be consumed by the user?
I suppose this change does get to this exact point, since it obfuscates the page location so far as to be totally inaccurate, just as channel numbers obscure the frequencies and data transmission methods on television.
I really don't know why you consider that a bad thing. It's a little bit silly that users have to know or care about what a domain/URL is. Like, if a user is just trying to get to, say, CNN, can they not just type in CNN and have the browser do the rest? Why put up additional barriers?
We basically have the system you describe. I type CNN and the browser brings up a page of things that match my query. The first one is CNN. If I visit CNN a lot, my new tab page will have a thumbnail of the CNN website and I end up with an even simpler UI of just clicking a picture.
It's about the principle. www is a valid subdomain.
Browsers are supposed to be as unopinionated as possible since they are browsers, not mediators, and their job is to implement the standards of the web.
No, but I want my browser to notify me that it's blocking popups rather than just hiding them from me entirely and assuming I never wanted to see them anyway.
This is a fair point. A possible counterpoint is that one can draw a line at malware. A fairly strong argument can be made that controlling popups is an exceptional security measure that is the only way, or the most effective way, to control the spread of objectively dangerous software. It is also a matter of UI - popups could appear faster than a human could control them, so it makes sense to make popups opt-in (which they are).
No similar argument can be made about disabling "www." It is purely an opinionated decision amounting to "you don't need www" - assuming it's not a bug, of course.
I didn't realize that the standards of the web specified anything about how an address bar should be displayed (or, for that matter, that it specified anything about an address bar even existing).
> This document describes the syntax and semantics for a compact string representation for a resource available via the Internet. These strings are called "Uniform Resource Locators" (URLs).
Safari doesn't show you the URL at all until you click, either; so the URL bar is just a domain indicator -- in that case, it seems less objectionable that it doesn't show the FQDN but instead something different -- copying the URL from a screenshot was bound for disappointment anyway.
My site, although one can argue it is poorly configured (it is intentionally configured this way), is a great example of why this is problematic.
I never set up a www CNAME, but now, unless you're paying attention (or actually reading error messages), you might not notice that the reason it failed is that you're on the `www` subdomain rather than the bare domain. The URL bar doesn't convey this information until you click it, and only then does it show the full URL.
It's really minor but I also don't see a good reason to do this. All the sites hosted by the company I work for are hosted on `www.` with the intention that it "looks more professional".
To be honest, I want to see the actual domain I'm on in the location bar, and not some obscured version of it. In what universe is hiding part of the domain a usability improvement? What does it improve? Confusion? Misinformation? Lack of trust in the location shown by the browser? It's so stupid I had to check the calendar to make sure it's not the 1st of April today. I didn't check it in Chrome though, for fear that some days might have been hidden away as they were considered trivial.
You know, as much as I'm all for considering www a trivial subdomain, this should have had much more public discussion before implementing it into the most used browser in the world.
To be clear, the "www" is only missing from the URL that is displayed in the address bar, while the browser is still attempting to resolve and fetch the URL with the "www." So instead of something that would completely break many websites, this is just something that is as confusing as hell for no good reason.
When a website requires "www" to resolve and respond, who in their right mind would want it hidden in the displayed URL?
URLs can be complex and sometimes not intended for human consumption.
So fine, don't display the URL. Define a full-on standard that lets a user agent show a "friendly but not unwieldy" name to the user. You have that funky Graph API or whatever Facebook started, which allows URL posts in Facebook to expand to include site info. Why not just use that at the top of the browser?
Then design the browser to allow access and editing of the URL somehow if desired (an old browser called 'OB1' didn't display the URL unless you pressed Ctrl-W, for example), but don't display it by default.
The truth is probably not many technical people directly enter URLs in browsers any more.
Honestly maybe it wasn't a good idea to allow query strings in URLs. Perhaps deprecate them and keep URLs looking like hierarchical filenames. Which honestly, is still something difficult to understand for non-technical people.
I'm much more okay with Safari's implementation, because it's significantly more discoverable. Safari also makes it much simpler to disable this behavior altogether, with it earning a place in the settings ... rather than being behind some "flag" that may disappear in some future release.
Google has signed a contract with ICANN, as it is a registry and a registrar. In that contract it has agreed to limit behaviour that affects the security and stability of the name system. This behaviour of Chromium is in contravention of that contract.
This is Google making subdomain usage decisions for other entities outside of Google. My domains and how subdomains are assigned and delegated are not Google's business to decide.
We allowed this to happen; we allowed Google to claim the Internet and the web. They make the rules now. Give in to and embrace hiding www. and AMP, before it's too late and you're out of the loop.
I can somewhat understand the urge to de-clutter the URL and emphasize which parts are most important.
But I'd prefer they did it in another way which doesn't involve hiding anything. For example, render the most important parts of the URL in bold type, a color that contrasts more, or a slightly larger point size.
That way, you still make it easier for the eye to pick up the parts that the browser believes probably matter most, but you don't make it impossible to see the parts that might also matter.
It must be this. They haven't given a single positive reason, except "users need not concern them with this information", which is so vague it's not even a reason.
Google is still (partly) a search engine, and people interacting with the address bar is pretty much the opposite of using a search engine. So Google wants to discourage that. They will never teach users how URLs work, how to read them and definitely not how to construct them, because it makes people less dependent on their search engine.
Think about it. Even the security part. Google wants to be the arbiter of what is secure and what is trustworthy. Anything that lets users figure this out for themselves, or that can be mediated without Google coming in between, takes away from this power.
They don't show http/https any more, just a coloured lock icon. The meaning of that lock icon is for Google to decide, unlike the meaning of http/https.
Now they take away information from the address bar, claiming it is irrelevant and none of our concern. They just want you to click on the links in their search engine, or some app or assistant thing.
One thing it is NOT about though, is distraction. If anyone knows that users can tune out irrelevant information perfectly fine, it's Google. They are an advertising network, after all.
The issue has only 46 comments at this moment because they have restricted the permission to comment. Very nice. I thought there would be hundreds otherwise.
This should have been an opt-in feature. And if enough people opted in, only then do you make it the default, and when you do that, alert all users to that change.
Hiding the protocol was understandable. For web pages it was 99% http or https, and they could convey which with an icon. (iOS Chrome still shows it for some strange reason.) Then mobile Safari dropped everything but the domain name and everyone freaked out. But you can get everything back in Safari by clicking on the address bar; not in Chrome. Such a simple thing to implement.
If anything they could have just hidden the query string, unless it's fairly simple like HN comment pages (item?id=17927972). But with pushState, you could just hide the ugly bits (tracking, language, etc.) from the user. Take this Chrome promo URL:
Not in favor of Google deciding yet again how I want to browse/view the web. I've been extricating myself from Google services slowly, and I think it's time for Firefox again. Kind of arrogant to decide something like this for more than a billion users, and a step backward for those of us in IT who have a hard time getting the user to understand a visual difference in the address bar.
If only we had something other than the subdomain to indicate what protocol we want to use to access the host in question. Like maybe some kind of... port number and/or protocol prefix.
"www." is useless. Eliminating it wouldn't be a bad thing.
"m." for mobile sites is extremely archaic. Sites should work on mobile without special subdomains.
If you issue subdomains to strangers on your root domain, the URL bar showing the subdomain shouldn't be your "security model."
The root domain is the root authority behind the site in question. Chrome showing users that information in a clear and concise way isn't "subverting the domain name system".
Safari has been doing this for a long time, and Chrome/Chromium have settings/flags that allow you to turn this behavior off. The amount of crying over this change is ridiculous in my opinion. As long as they keep this configurable (in fact, they should add a UI config for it) I see no problem here.
"www." is useless until you have multiple servers. Say you run a game server and a website on a web server for that game server as well. You need to give them separate domains so that they can resolve to separate IP addresses i.e. "www.example.com" might resolve to <web server IP> and "example.com" might resolve to <game server IP>. This is far more robust, faster and simpler (and how the system was intended to be used) than trying to set up some kind of proxying system on one of the servers to handle traffic really intended for the other one. In real world usage of course you'd probably put the game server at "game.example.com" and the web server at "example.com" but either way you end up with a subdomain, the arrangement with "www." as the subdomain instead of "game." is not really any more useless. I believe this can possibly be worked around with SRV records but then you have to rely on whatever your client might be handling SRV records.
> If you issue subdomains to strangers on your root domain, the URL bar showing the subdomain shouldn't be your "security model."
What do you propose instead? Azure gives me custom subdomains for my servers that are immensely useful, being much more regular and memorable than IP addresses. I only see two solutions here: stop doing that or let anybody with access to subdomain creation pretend to be "azure.com". One of them is comically dangerous and the other is throwing out a perfectly good feature to save a handful of characters in the URL bar.
The point of this change is not to mess with www. or m. or any other subdomain. The real reason, obvious from Google's latest moves, is to hide one and only one subdomain from users: amp. They really don't want you to know when you are not on an independent website but actually inside the Google silo, playing by Google's rules.
This is such bullshit. They call it a "trade off", but a trade off between what, exactly? On one side there's a non-trivial number of negative consequences to this change, and on the other, the maintainer is literally unable to give any reason besides "this isn't information that most users need to concern themselves with in most cases".
No other reason given besides that very vague assumption about... I'm not even sure what. Say most users need not concern themselves with this information: does that mean they would otherwise be concerned about it? What does that mean? How is it bad? And why don't you just say so?
I'm calling bullshit. If there is a reason why they decided this, it is not that. My bet it's something political.
Surprised to see not a single mention of Brave [1] in this large thread of comments as an alternative browser. It's not the best for development, but for casual browsing it's great. Brave blocks all ads and trackers.
As a developer on multi-tenant applications that are mounted on different subdomains, I think considering `site.com` the same as `www.site.com` should be outright illegal! There is nothing trivial about these domains; they are separate unless configured to point to the same record.
This is only visually what's displayed in the address bar, similar to how they started dropping http/https from the start of URLs a while back.
While I'd personally disagree with these kinds of changes, and I would always prefer it to show the http/https and the www dot, it doesn't actually break things like copying and pasting the URL from the address bar. You still go to www.google.com when you type in www.google.com; it's just trimming away the www dot displayed in the address bar.
I think for the vast majority of non-technical people it's not really going to make even a little bit of difference. If anything it'll make it easier to spot what domain they're on, assuming they're even checking to begin with.
Have any of you with corporate-type proxies been seeing authentication issues? Since moving to Chrome 69 a number of our users are reporting repeated proxy auth prompts when this should normally be handled transparently by Kerberos.
If you serve materially different content based on small differences, that's user-hostile, and common tools have no obligation to support you. Chrome shouldn't cater to sites that change behavior based on a www prefix, because the vast majority of users expect those sites to be the same. Hiding the prefix won't change their mind.
Small differences? A subdomain or an entire protocol is a small difference? And what about any web site that uses an "m." subdomain that doesn't intend it for mobile? I don't think that calling these things "small differences" is reasonable. And really, it makes perfect sense not to serve the same content over an insecure connection as over a secure one, but thankfully that change hasn't happened!
I agree, but Google doesn't just get rid of prefixes.
news.ycombinator.www.com would get normalized to news.ycombinator.com (not sure what would happen when using the www tld)
That's an extreme example, but there's an obvious security risk. This is a half assed change with buggy behaviour, regardless of whether you think www.domain.tld is materially different from domain.tld or not.
This should be the top comment. Users (including technical users) all behave weirdly, writing "www." or not depending on their mood or previous experiences. Developers setting up webpages do the same, and often manage to screw up the CNAME on the www subdomain, or end up with two valid certificates, one for the www and one for the non-www. Also, once you rotate certs, you now have two to rotate. It's a mess, and Chrome's change is aiming at normalizing this. The complaints make zero sense.
So now they've added "Restrict-AddIssueComment-EditIssue" to the bug. [Edit: This means that only users with EditIssue permissions may comment on it] Apparently we've complained too loudly for their liking.
Thank you to whomever submitted this. My main annoyance is that when I want to go to a subdomain, I'll typically start typing foo.domain.com, but now the browser sets it to www.foo.domain.com. Annoying.
Let me clarify: since the subdomain is hidden, I now have to munge with the URL input field to just get the full url to show up so that I can then modify the subdomain.
If I was designing the user experience for accessing content from a global information, entertainment, and commerce network that would be used by a large, global, non-technical user base, I would not use the current URL format.
I would wager the twelve-character prefix "https://www." would make no sense to 99.9% of the users, yet it would apply to over 99.9% of the requests they make on any given day.
I am guessing Google has evidence that this is the case.
Between this and AMP it looks like google's abusing their market dominance to seize control over the open internet.
We were able to somewhat break up Microsoft's attempt at something similar in the 90s through anti-trust legislation; perhaps it's time to look at Google?
On a side note, it seems like the openness of the internet has come under attack more and more often in the past few years, from a combination of corporate and government entities. It's absolutely essential that we keep the internet open, transparent, and free.
This isn't entirely without precedent. Firefox does something similar by greying out the `www` in the UI, Chrome just decided to take things a step further by hiding it entirely.
Firefox's behaviour is that it makes everything except the eTLD+1 grey, because that's what's normally useful for evaluating authenticity. There's no distinction made between `www` and any other subdomain.
I don’t understand the war on URLs. I can’t even edit the link a bookmark uses in some browsers now, which is infuriating for SharePoint or other sites where “currently displayed URL” is not a clean and canonical URL (even though one exists). And when the terrible random URL jumble currently in the address bar won’t even reproduce the page correctly later, this ceases to be a bookmark.
URLs may be weird but they are not so complex that we need these games.
This is a classic example of techies, marketers and MBAs deciding what is "too confusing" for "normal" people to understand.
This is why the History channel shows reality TV instead of programs about, you know, history. It was decided that "history" that doesn't involve celebrities, aliens or Nazis is just too boring for normal people.
Google has decided that the 50% of population who are confused about the basics of URLs are more important than the rest of us who have been using them for over 25 years without problems. This will in theory allow Google to target a larger swath of the population. In practice, it simply takes something useful away from the rest of us.
It's "digital ochlocracy": The targeting of the least common denominator to the detriment of those with greater knowledge or expertise.
This is actually a lot harder than it sounds. You can CNAME www, but you can't CNAME your apex (some DNS providers pretend you can, they will resolve your CNAME for you and provide A/AAAA records to recursive resolvers -- but if you wanted to CNAME to a DNS based load balancer, that doesn't really work)
This causes browser security problems, creates load-balancing issues, makes failover harder, etc. It only works for small sites that don't need to be highly available.
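A quick way to see the asymmetry, sketched with the third-party dnspython package (hostnames illustrative, helper hypothetical):

    import dns.resolver  # pip install dnspython

    def cname_target(name):
        # Returns the CNAME target for a name, or None if there isn't one.
        ans = dns.resolver.resolve(name, "CNAME", raise_on_no_answer=False)
        return ans.rrset[0].target if ans.rrset else None

    print(cname_target("www.example.com"))  # e.g. a load balancer's name
    print(cname_target("example.com"))      # None: the apex can't be a CNAME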
According to this blog post[0], IE misbehaves with cookies and allows breaking SOP.
The spec says that a cookie flagged with the domain "domain.com" is supposed to only be valid for requests to "domain.com" and not "subdomain.domain.com". A cookie that is intended to be valid for all subdomains is supposed to have a preceding dot, like ".domain.com".
Older versions of IE (that may still be in use) will treat "domain.com" cookies like ".domain.com" cookies, allowing malicious.subdomain.domain.com to access cookies only intended for "domain.com".
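That leading-dot distinction is actually visible in Python's standard library, which implements the old dotted-domain matching rules; a small sketch:

    # http.cookiejar.domain_match follows the RFC 2965-era rules the
    # parent comment describes: ".domain.com" matches subdomains,
    # a bare "domain.com" matches only the exact host.
    from http.cookiejar import domain_match

    print(domain_match("malicious.subdomain.domain.com", ".domain.com"))  # True
    print(domain_match("malicious.subdomain.domain.com", "domain.com"))   # False
    print(domain_match("domain.com", "domain.com"))                       # True

The IE bug described above amounts to treating the second case like the first.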
I don't understand your argument. Unlike "http://", the www subdomain has no real technical implication; it's just a common naming choice for web servers.
Spirited discussion! I think for the average Joe, this change does not matter (i.e neither good, nor bad). Two better changes: a) highlight https / secure connections even more prominently, and b) detect and highlight misspelled domain names (a major cause of phishing attacks). Rationale is that for the average Joe, a clear warning that they are about to go to a bad site is more valuable than looking at the accurate domain name.
Anyone know if this checks for a DNS record that supports equivalency between the www/empty subdomains? It's possible for them to have different values.
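As other comments note, the trimming is purely visual; no equivalency check is involved. If a browser wanted one, it would need an explicit double lookup, roughly like this sketch (third-party dnspython package; function name hypothetical):

    import dns.resolver  # pip install dnspython

    def www_matches_apex(domain):
        # Compare the A record sets of the apex and the "www" host.
        apex = {r.address for r in dns.resolver.resolve(domain, "A")}
        www = {r.address for r in dns.resolver.resolve("www." + domain, "A")}
        return apex == www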
Whether "we" like it or not, URLs are increasingly becoming a rare techy term. It is no wonder the average user has a problem with URLs with the advent of omnibars. I for one can't sometimes "force" Chrome to go to domain.com instead of searching that expression. Oh, and try to view the full URL you're accessing on your iPhone, particularly if you're GETting something other than /.
Looking at everything that Google is doing at the moment, it looks like a shameful power grab for the entire WWW.
I suspect that Google wants to gradually get rid of URLs, because if users can't identify websites by their addresses, they will be more dependent on Google Search. Landing on search pages means that users will click on more of their well-concealed ads.
I think it's also the reason why Google Chrome's URL bar has such terrible auto-completion functionality. It appears designed to send users to Google Search to click on ads instead of taking them directly to their destinations. Firefox's old address bar didn't do that. (The newer address bar in Firefox appears to be following Chrome's time-wasting functionality, and its auto-completion no longer works well.)
Without URLs, users won't know whether they are really on your site, viewing your content on google.com (via AMP), or in some kind of app or PWA. To get to an item of content, most people will end up clicking on an ad in Google Search or land on an AMP page that eliminates most monetization options that don't involve passing the money through Google's systems.
URLs shouldn't be trimmed at all. When you're on the URL http://example.com/ (note the trailing slash), Chrome will only display "http://example.com", even if you copy it onto the clipboard. (It's still possible to fix this in Firefox with about:config.)
Technology shouldn't be made more restrictive and stupid -- users should be taught how to use technology correctly, and they will adjust. I've even met professional programmers who don't really understand URLs because their browsers mask the real URLs. The WWW isn't only about consumption -- it's about creating as well. People need to know how it works in order to create things.
When people say they're sick of Google deciding what's good for the whole internet and leveraging their massive marketshare.. it's stuff like this they're talking about. Other little things too like their own non-standard implementation of IMAP/POP stuff. Invisible to most average users, who continue to merrily click away on Google $EVERYTHING and Gmail.
Surely Google of all companies has the resources to see exactly how much disruption this will actually cause. And I don't believe they get to make the decision based on that alone, but they should share the numbers. Any domain that returns different content for "m." or "www." vs. the apex domain is affected.
This is kind of what happened to the KeeFox addon for Firefox.
The developer one day decided on a new default for URL matching. Instead of matching the hostname, i.e. old.reddit.com, it would then match the 'domain', i.e. *reddit.com.
Back then it was as stupid as this is now...
I am in favor of the change. I do not think hiding technical information from non-technical users is a bad thing. Making it impossible for technical users to work it out is the bad thing.
If technical users want their own browser mode, where all these things are readily available, then I'd be all for that.
Clicking on the browser location bar should reveal the full URL. But the noise of the protocol, the www. prefix, and the param string is all irrelevant to the experience of the casual user of web browsers and should indeed be hidden. We're already hiding a ton of stuff from them; if you want to go see it, you can always "view source".
One argument in the thread was that 'www.pool.ntp.org' and 'pool.ntp.org' go to a website and a nameserver respectively. Nontechnical users will never want to use a web browser to access a nameserver, so there's no UX benefit to distinguishing them. From a URL entry standpoint, it should go to the URL that was typed into the location bar. There isn't a conflict.
It breaks how the web is supposed to function. There is no consensus on the utilization of subdomains and which ones are considered to be trivial. I could write an app which maps usernames to subdomains and Chrome would break this if a user had a username which was in the trivial list. I think Tumblr is a site which does this. It really isn't about technical or non-technical, it is about what subdomains can be used for.
Edit: well, I was right; this breaks in the new Chrome (http://m.tumblr.com/). Tumblr usernames on the 'trivial' list are broken.
There's no such thing as "how the web is supposed to function." There is an ideal that the plurality of the early web once sought to chase. There are more ideals that the more commerce-friendly users and creators of the web started to chase.
The norm is always shifting. It's our job as technologists to accommodate how people want to use technology, not to tell them that the way they want to use it is wrong. Because if we try to tell them no, then they're going to use the freedoms we labored so hard to give them to stop listening to us.
It’s not that technical, though; to people, URLs are just strings. Why would people care whether they can see the www or not? If anything, this is likely to introduce more confusion.
The implementation will definitely need an up-to-date Public Suffix List. We control a bunch of public suffixes, and it's a huge, huge difference whether you browse to the address "www.com" or to "com" - or blogger.com vs. www.blogger.com, or dyndns.org vs. www.dyndns.org. In these cases, different organizations/persons control the two names.
Should you not know which one you have connected to?
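For what it's worth, here's roughly what a PSL-aware check would have to do before hiding anything, sketched with the third-party tldextract package (which bundles the Public Suffix List; the function is hypothetical):

    import tldextract  # pip install tldextract

    def safe_to_hide_www(host):
        # Hiding "www." is only defensible when it doesn't change the
        # registrable domain, i.e. who controls the name.
        if not host.startswith("www."):
            return True
        stripped = host[len("www."):]
        before = tldextract.extract(host).registered_domain
        after = tldextract.extract(stripped).registered_domain
        return before == after

    print(safe_to_hide_www("www.example.com"))  # True: example.com either way
    print(safe_to_hide_www("www.com"))          # False: crosses a suffix boundary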
I think an exception list rather than a straight map is all that's needed. If you want to buck the convention, fine, don't expect the browsers to know what you're trying to do, so update this list, which takes 30 days to propagate, and you can have your fun.
True. The www. subdomain has been irrelevant for a very long time now and a great source of confusion for both users and developers who never know how to set their domains.
From the chrome flags [1] page: "On Mac, this enables MacViews, which uses toolkit-views for native browser dialogs."
What is MacViews and how do I check if it is enabled?
It's 2018, ffs, and the internet is everywhere. People should be expected to have a better understanding of the internet and of what they see in their address bars.
Maybe instead of masking users from the "complexity" we should be teaching them instead.
On a completely unrelated topic, there is a bug I'm noticing: ICANN doesn't allow emojis in TLDs.
What? That's ridiculous-- I can put them in the main part of the domain name along with an enormous number of cute homoglyphs. So why not smileys and homoglyphs in TLDs? If users can deal with smileys in the main domain that tells where the site is, surely they can deal with them in the little extra part that comes at the end. Everything's just .com underneath anyway, so what would the problem be?
Google has done the work to remove the unsafe part of domains. Now our domains can really shine. So give us more bling to put in there, ICANN! C'mon.
Also-- why not www as tld? I love "biz" and "pizza" as much as anyone, but excuse me they got nuthin on the world wide web. C'mon ICANN give us "www" in the Trailing Little Domain! We're ready for it now! :)
So apparently when Google said they would like to "fix" URLs, what they actually meant was that they would like to break them, whine about how broken they are in the future, and then come up with something worse.
Although I'm a firm NO-WWW believer, this is hilarious, because it is not the same as hiding http(s)://; this is, technically speaking, an alteration of the URL.
Still..with Google's horrible AMP efforts this seems nice :)
I would love to get rid of www, but this is going to confuse the heck out of people. We could get rid of www if more DNS providers supported ANAME. And Google Domains does not. How about starting there?
Curious, why did www.foo.com become the standard instead of using the apex/root foo.com domain? Was this because of the DNS limitations of using CNAME records at the apex/root?
Because HTTP was one service among many, not the primary one, and it failed to use a DNS record type other than A to look up its target. Other protocols manage this redirection to the "$protocol server" through dedicated records instead of through a subdomain name: mail has MX records to find the mail server, and many other protocols use SRV records to discover their specific targets.
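For illustration, here's how that discovery works for mail, sketched with the third-party dnspython package (the queried domain is just an example):

    import dns.resolver  # pip install dnspython

    # Mail clients locate the mail server via MX records...
    for rr in dns.resolver.resolve("gmail.com", "MX"):
        print(rr.preference, rr.exchange)

    # ...and many newer protocols use SRV records, along the lines of:
    # dns.resolver.resolve("_xmpp-client._tcp.example.org", "SRV")

HTTP got no such record type; browsers just resolve A/AAAA for whatever host appears in the URL, hence the "www." naming convention.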
Instead, what they ought to do is go through the top 10,000 domains and test to make sure that:
1. www.* doesn't end up as a 404 or DNS resolution error
2. www.example.com matches the content from example.com
3. www.example.com -> example.com is a valid permanent redirect
and then finally
4. Publicly shame them. Don't hide it from people because you run into all sorts of UX issues.
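A rough sketch of such an audit for a single domain, using the third-party requests library (helper name hypothetical; the numbered comments map to the checks above):

    import requests

    def audit(domain):
        problems = []
        www = f"http://www.{domain}/"
        try:
            r = requests.get(www, allow_redirects=False, timeout=10)
        except requests.RequestException:
            return [f"{www} does not resolve or respond"]            # check 1
        if r.status_code == 404:
            problems.append(f"{www} returns 404")                    # check 1
        elif r.is_redirect:
            if r.status_code != 301:
                problems.append("redirect is not permanent (301)")   # check 3
            if f"//{domain}" not in r.headers.get("Location", ""):
                problems.append("redirect does not target the apex") # check 3
        else:
            apex = requests.get(f"http://{domain}/", timeout=10)
            if r.text != apex.text:
                problems.append("www content differs from apex")     # check 2
        return problems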
I really hate www. for a multitude of reasons, but the biggest one is just thinking about the amount of time spent saying it out loud. Nothing aggravates me more than turning on the radio and hearing "go to our website DOUBLE YEW DOUBLE YEW DOUBLE YEW DOT <some long name like nelsonfurnishingandrepair dot com>". That's like two seconds of air time they could recover.
Firefox is my primary browser, so it doesn't affect me, however this has already caused confusion within the web agency world I work in, both with technical staff and clients.
Many non-tech-savvy users I know never use URLs. They use the address bar for doing a Google search and then click on the correct site in the search results.
The worst case of this that I've seen is with Safari on Mac. The domain gets completely hidden if it's an HTTPS site that is "verified" (i.e. the entity shows up in Dun & Bradstreet).
Visit "www.apple.com" in Safari and you will get "Apple, Inc" in the URL bar after a moment.
This seems like a potentially reasonable choice until you realize that someone could register, say, "Apple, Inc" in another state and get an HTTPS cert to match it. One phishing email later and the Safari user is convinced they are logging in to the correct domain.
I can see lots of new phishing attacks made possible because of this. :(
It's true that this change is going to break some stuff for some people, but I welcome it. www has to go. It provides no meaning (as opposed to john.example.com) and serves no purpose; we have the scheme in URLs to figure out the protocol and port.
There's a particular 8-letter, simple subdomain for my domain (.com) where Chrome treats it as a Google search. Doesn't do it for any other subdomains. It makes no sense.
Just updated to see it in action. Seems like a nice improvement for end users. Don't think there is any reason not to do this, other than a nostalgic desire for things to stay the same. Most sites already have a www. to . redirect in place, and if you don't, it's a trivial change.
> Don't think there is any reason not to do this [...] Most sites already have a www. to . redirect in place
Most. Not all.
The browser is doing something out-of-spec for really no reason at all and it has the chance to break some sites.
www.domain.com and domain.com are never guaranteed to go to the same page. Yes, out of modern convention, they do now, but it's only a convention. Sites are free to break it, and some do, so to make this change is a bad idea.
The obvious difference is that that's higher up in the hierarchy rather than lower in it. www.example.com and example.com are controlled by the same entity; example.com and example.foo may well not be.
I'm assuming you can always click to expand and show the full URL.
For a lot of people (devs especially) it would be nice to be able to change an option to see the whole URL at a glance, but for most people it's just a confusing mess they have to look through for the domain name.
Is anyone else starting to see parallels between Google's attitude toward "progressing" Chrome and Microsoft's attitude towards IE in the 3-8 version days?
Since everyone is wondering why, and since I happened to stumble across a reason during my time as a pentester, here you go:
Spearphishing is still one of the most common ways of breaching a corporate network. If I target you, you will likely fall for one of my attempts. If you are a company rather than a person, my odds go way up, because I have N chances to trick someone rather than 1 (where N is roughly the number of people at the company with email access).
This is one of those things that everyone says "Ha, I'm smart. I'd notice. You can't trick me."
And maybe you are. But you're also distracted. And that's my greatest advantage against you. All I need is to sneak in an unexpected Github prompt that looks completely authentic, and now I have your Github password. Wanna bet you don't have 2FA turned on, even though you know you really ought to? And even if you do, it's getting easier to social engineer your way past AT&T's lovely customer support: https://www.youtube.com/watch?v=caVEiitI2vg
Ok, what's the point?
This: Every character in the URL bar unrelated to the apex name is a deadly distraction.
Right now, how do you know you're actually on HN instead of some knockoff? "ycombinator.com".
How many characters do you have to read unrelated to that? "https://news. /item?id=17927972"
The most vital part of a URL for vetting identity is also, usually, the hardest to see.
Now, I don't know whether google made this change in order to assist with this. But it's one possible justification, and a step in a good direction.
We may not like it, just like we didn't like when Google removed the clickable "cached" links from search results, but in this case consumer protection outweighs our urges as a power user.
Wasn't this (partially?) solved by making the domain black and the rest of the text gray? That's how it appears in Firefox. It makes it very easy to see "ycombinator.com".
According to the PSL their list is also used for this purpose in Internet Explorer, but not other browsers.
(The Public Suffix List is the Mozilla project that knows .co.uk isn't the same kind of thing as .google.com even though they both have two DNS labels. These days every non-crap browser uses it to restrict cross domain cookies but they aren't consistent when it comes to other features)
Is this specific to a version of Firefox, or an extension?
I'm using Firefox 62.0, and I do not see a difference in color between the domain and the rest of the URL.
(Windows 10, 1080p screen.)
ETA: Holy crap, if I zoom into a screenshot, I can see the difference. My eyes cannot benefit from this feature under normal circumstances, though. Looks like black (#000000) and gray (#807D7D).
You may want to check your monitor's brightness/gamma settings. That's too large a difference to be invisible, and there are many sites that use light grey on white that are much, much closer than this (sadly :( )
I get #9D9D9D on Firefox 61 on Ubuntu (1080p). It is perfectly clear to me.
If you have trouble distinguishing between the grey and black, perhaps file a bug report requesting to slightly lower the saturation of the grey? Or perhaps your browser's theme is using that darker grey?
Do not hide the relevant info. Nearly every character in the URL is relevant info.
Instead, make the key part stand out, so even a cursory glance catches it instantly. It still allows more careful examination without clicking anywhere, or second-guessing.
Also, detect anomalies like www.google.com.hacked.domain.wtf, and show them in a really high-contrast way. Both Chrome and FF do this already.
I think Firefox shows normal URLs about right; they could add even more contrast.
I was just looking at the URL bar in firefox and thinking yea, I know it's ycombinator.com because it's right there, and there's a big green lock on the left.
Google can do what they want with Chrome, as far as I'm concerned it's the new IE.
> This: Every character in the URL bar unrelated to the apex name is a deadly distraction.
I'm not convinced. When your tools lie to you (as this does), you are LESS likely to notice a problem when you're busy. Another way to view this approach is:
> Every character in the URL bar different from the actual URL is a deadly lie.
Distraction is a problem, I totally agree, but other methods like Firefox's color-shading seem to work just fine.
If the idea is to protect users so that they don't end up clicking on https://news.ycombinator.com.myhackerdomain.com, then you open up attacks on any platform that offers custom subdomains.
If the browser makes those subdomains look the same by hiding them from the address, it looks like a step backwards in securing the web.
Now, imagine the actual platform has a payment section, and I create a fake subdomain that looks pretty similar, email you, boom: I get your CC info because I tricked you into entering new CC info (assuming your scenario of someone being distracted).
Only supposed "trivial" subdomains are hidden, such as www. and m.
Anything else is still shown. fake-original.blogger.com will still show up as fake-original.blogger.com because fake-original. isn't a trivial subdomain.
I still think it's a stupid move, though. It's a simplification that is incredibly unnecessary and may be harmful when dealing with the rare site that doesn't treat www.domain.com and domain.com as the same.
Flag websites that look like phishing URLs - websites that contain domain names of popular websites in their subdomains or other parts of the URI. But initially, don't do anything; it could be harmless. AMP has domain names in the URI, right?
But, as soon as the user starts typing into a text-entry field (especially a password one), bring up a pop-up warning that this might be a phishing site.
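A hedged sketch of that first step (the popular-domain list and function are illustrative; uses the third-party tldextract package):

    import tldextract  # pip install tldextract

    POPULAR = {"google.com", "paypal.com", "apple.com", "ycombinator.com"}

    def looks_like_phishing(host):
        ext = tldextract.extract(host)
        if ext.registered_domain in POPULAR:
            return False  # genuinely on the popular site
        # Flag popular names embedded in the subdomain labels, e.g.
        # "news.ycombinator.com.evil-site.com".
        return any(name in ext.subdomain for name in POPULAR)

    print(looks_like_phishing("news.ycombinator.com"))                # False
    print(looks_like_phishing("news.ycombinator.com.evil-site.com"))  # True

The pop-up-on-input part would then hang off the browser's form events rather than anything shown here.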
Funny that you mention that; I never noticed, but Firefox displays "ycombinator.com" in the URL "https://news.ycombinator.com/" in black, while the other parts you are talking about are light gray. Seems more reasonable than hiding parts.
Once you get as big as Google or Apple, you're too big to fail and there is no longer any pressure to make good decisions because there are no actual consequences for making moronic decisions.
I had a strong gut feeling Google has one or multiple ulterior motives for doing this - motives that aren't very user-friendly (as they claim) but more Google-friendly. I thought it may be so that it can track every link you type in Chrome without 99.9% of the users being aware of it. Right now Google does that only for "searches" in the omnibox.
I hadn't thought about the amp angle, but that could be another reason for pushing this, too. Perhaps there are more we won't know until it's too late to do anything about it (other than changing browsers, of course).
I'm pretty sure the eventual plan is to force everyone to browse the web using a version of the App Store, which we all know is incredibly secure, and never difficult to use.
It seems that "m." is also considered a trivial subdomain. So when a user clicks a link to a "m.facebook.com" uri, they'll be confused why FB looks different when the browser reports it's on "facebook.com".
I sincerely hope Firefox doesn't follow suit.