This technique is widely abused by phishers. Most browsers detect such phishing attacks and warn the user about it (see the example in Safari 5).
Firefox might do a better job on this front: it performs a HEAD request first, to see whether the website actually requires authentication. If it doesn't, the user is shown a warning that they may have been lured into a phishing attack.
That could be easily spoofed by requiring just some username on the server side, assuming you set up your web presence such that these links always include some username. The HEAD request won't help you there.
This won't help that much, because this means you can only visit the website with some authentication string, otherwise the browser will prompt for your credentials.
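The probe described above can be sketched roughly like this (the host and path are placeholders, and real browser logic is of course more involved): send a HEAD request and only treat the site as legitimately protected if it answers 401 with a real WWW-Authenticate challenge.

```python
import http.client

def is_real_auth_challenge(status, challenge):
    # A site that genuinely requires credentials answers 401 and names
    # an auth scheme (Basic or Digest) in its WWW-Authenticate header.
    return status == 401 and challenge.lower().startswith(("basic", "digest"))

def head_requires_auth(host, path="/"):
    # Probe with a HEAD request, as described above, before the browser
    # ever sends the credentials embedded in the URL.
    conn = http.client.HTTPConnection(host, timeout=5)
    try:
        conn.request("HEAD", path)
        resp = conn.getresponse()
        return is_real_auth_challenge(resp.status,
                                      resp.getheader("WWW-Authenticate") or "")
    finally:
        conn.close()
```

As the parent comment points out, a malicious server defeats this check trivially by answering 401 itself.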
The userinfo component is part of the generic URI syntax, which FTP URIs follow but HTTP URIs don't.
See the sadly still not obsolete RFC1738:
http://tools.ietf.org/html/rfc1738#section-3.1
RFC 2616 also says "For definitive information on URL syntax and semantics, see "Uniform Resource Identifiers (URI): Generic Syntax and Semantics," RFC 2396" and it is frequently documented on the interwebs that HTTP uses the generic URI syntax.
I'm not trying to suggest that the intent of the RFC was to allow <authority> where it says <host> in 3.2.2 but I can see how this wording might be confusing, especially when taken with historically observed UA behaviour.
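For what it's worth, the generic syntax the RFCs point at is easy to see in practice: Python's urllib.parse applies it to any scheme, HTTP included, and splits out the userinfo part (the credentials here are made up).

```python
from urllib.parse import urlsplit

# The generic URI syntax: scheme://user:password@host/path
parts = urlsplit("http://alice:s3cret@example.com/index.html")
print(parts.username)  # alice
print(parts.password)  # s3cret
print(parts.hostname)  # example.com
```

Which is exactly why the historically observed UA behaviour is so confusing: tools happily parse userinfo out of HTTP URIs even though the HTTP-specific RFC text doesn't grant it.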
I remember reading that Microsoft actually dropped support for that option, the reasoning being that it could be used to send users to malicious sites.
"malicious users can use this URL syntax together with other methods to create a link to a deceptive (spoofed) Web site that displays the URL to a legitimate Web site in the Status bar, Address bar, and Title bar of all versions of Internet Explorer."
I'm wondering if this was the reasoning for doing so in Chrome.
I get it if Chrome developers want to stop spoofing of malicious urls like http://veryfamouswebsite.com:pwd@malicioussite.com.
However, I prefer the Firefox way: FF prompts you if you want to login with the username specified in the url.
That's cool, but how many users will understand what's being asked of them? If the user says yes (and users will always just click the "go on, do it" button), they've basically agreed to be tracked indefinitely :-(
Wow, BasicAuth is quite unusable these days.
Disallowing BasicAuth in typed-in urls is just one side. The other thing is that I can't use BasicAuth to download files on Android[1] (and that bug is 4 years old)
http://www.123-reg.co.uk/ is a pretty big domain management company and they still use basic auth. I actually don't think users care. It looks a bit shoddy, but it works fine. Most people care more about the price of the product than how the site manages authentication.
I think that's a bad example to hold up, there's a repeatable issue with 123-reg's basic auth system that stops you from being able to fully log out on certain platforms.
Basic auth has no logout feature as far as I understand it. 123-reg tries to hack on something which looks like a logout feature but isn't.
For the point I was trying to make, it was the perfect example. It demonstrates that you can use basic auth, have "working" authentication, and be a successful website.
Note, the comment I was replying to was that basic auth is "unusable". 123-reg clearly disproves this.
And I cannot dispute any of your points, but the inability of basic auth to provide logout is certainly a concern that must be factored into any decision-making process. I personally love using basic auth in projects when I can, as it just works for the majority of single-user requirements.
While I understand their concern, it is nonetheless quite a useful feature for testing.
Firefox warns users with a small message if they are authenticating to a site, and asks them to confirm. This prevents pretty much all of the phishing attacks people have been talking about, without removing a potentially useful feature.
Ah! In many companies (I have seen it in both huge and tiny ones) this will break their SSO 'systems' :) Intranet pages covered with links like this to internal systems are rather normal.
This may be slightly OT, but having the auth in the url has some benefits when using a handler for certain file types. The Android Browser breaks this: if you go to a site that requires auth and it then links to video/audio, the credentials are lost when the media player is passed the url, and the media fails to play.
dXNlcm5hbWU6cGFzc3dvcmQ= is base64 encoded 'username:password'
The Authorization header is the same one that will be used if your browser prompts you for a username/password.
Also, the Authorization header is plain text. Whether you pass credentials in the URL or in the Authorization header makes little difference in terms of interceptability; they're just different lines in the HTTP request. You need SSL to secure either.
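As a concrete sketch of that equivalence (the credentials are obviously made up): whether user:pass came from the URL or from a login prompt, the browser turns it into the same one-line header.

```python
import base64

# Basic auth: base64-encode "user:password" into a single request header.
creds = base64.b64encode(b"username:password").decode("ascii")
header = "Authorization: Basic " + creds
print(header)  # Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=

# base64 is an encoding, not encryption: anyone on the wire reverses it.
print(base64.b64decode(creds))  # b'username:password'
```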
Well, basic auth is plaintext over the wire anyway. I don't think you lose that much 'security' by putting the credentials into the url.
That said, the most common argument that I heard against supporting this stuff is phishing, using the credentials part to push the real address as far as possible to the right/off the screen.
I - don't like that decision. I have basic auth services in internal networks and would like to access them this way.
Conceptually, HTTP/TLS is very simple. Simply use HTTP over TLS precisely as you would use HTTP over TCP.
TLS is meant to be transparent to the higher level protocol - in other words, independent of HTTP. The headers should be encrypted also.
For a long time, I also believed that the URL would not be encrypted (e.g. GET /example/url.htm), but since the whole connection is wrapped transparently, this is also encrypted. I had to check this to see whether GET requests with session IDs would be vulnerable over the wire - I was quite relieved when I realised my assumptions were wrong! It also meant I had to read a few specs :-)
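A minimal sketch of that layering, assuming outbound network access (the host and session ID are placeholders): TLS wraps a plain TCP socket transparently, and the HTTP bytes written on top, request line with its path and query included, are encrypted before they hit the wire. Only the host name leaks, via SNI.

```python
import socket
import ssl

def https_status_line(host, path):
    # Plain TCP first, then TLS wrapped over it; the HTTP on top is
    # byte-for-byte the same as it would be over bare TCP.
    raw = socket.create_connection((host, 443), timeout=10)
    tls = ssl.create_default_context().wrap_socket(raw, server_hostname=host)
    request = (
        "GET " + path + " HTTP/1.1\r\n"   # path and query travel inside TLS
        "Host: " + host + "\r\n"
        "Connection: close\r\n\r\n"
    )
    tls.sendall(request.encode("ascii"))
    status = tls.recv(4096).split(b"\r\n")[0]
    tls.close()
    return status
```

Calling e.g. https_status_line("example.com", "/?sid=abc123") sends the session ID inside the encrypted channel; a packet capture of the same exchange shows only TLS records plus the SNI host name.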
As an aside, everyone has blind spots, and continuing to downvote howardr does nothing but make you feel better about yourself. This bloke's total karma is down to '2', and I'm betting it's much higher than that minus this post.
Not nice of google to try to change the ways of the web without prior discussion. Some people may store these URLs in bookmarks for convenience. What's next? disallow the javascript: protocol maybe?
Since Chrome's addressbar functions as both a search input and an address bar, is it possible they're doing this to prevent usernames and passwords from being continually added to Google's search history logs? I see both sides of the argument for allowing or removing this functionality ... but the fact that user input from the omnibar is passed along to Google in real-time changes things and potentially opens up Google to receiving usernames and passwords to sites they have no business having access to.
Usernames and passwords can be added to links to resources (and images) which aren't necessarily protected by authorization, and in that sense can be used as a "cookie" of sorts to track users irrespective of cookie settings. It's a bit hokey, but I've heard of it being done before (more than once, actually). Try it on your Web server, and dump the headers for your image request after you serve this:
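A sketch of the trick (the token and host are invented): serve each visitor a page whose image URL carries a per-user token in the userinfo part. Browsers that honour userinfo URLs send it back as a Basic Authorization header on the image request, which the server decodes, cookie-free tracking.

```python
import base64

# Page served to one particular visitor; the 'username' is their token.
page = '<img src="http://token-4f2a:x@tracker.example/pixel.gif">'

# The browser's request for pixel.gif then carries the header:
#   Authorization: Basic dG9rZW4tNGYyYTp4
# which the server side reverses to recover the per-user token:
header_value = "dG9rZW4tNGYyYTp4"
token = base64.b64decode(header_value).decode("ascii").split(":", 1)[0]
print(token)  # token-4f2a
```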
OK, that seems like the sort of situation you'd want some privacy controls for, but this bug is about it being ignored when you type it into the URL bar yourself, right?
The ipad browser pops up a red phishing warning when you access a bookmark(!) that has credentials in the URL.
This is royally annoying because the same browser can't save these credentials otherwise.
So for sites using simple/digest-auth you're given the choice between typing in your password every time, or getting that stupid warning dialog every time.
What's wrong with a browser allowing an auth URL to be typed directly into the address bar or clicked from a bookmark, while disallowing it in any other form?
I'm providing a reason for the change, not a reason for Chrome's particular choice. Not trying to debate the bug itself, just providing a reason that it was addressed.
If those rewriting rules have to be server-side (and I think that's what you mean), the answer is that browsers don't get to dictate terms to servers that way; browsers can say what they will and won't accept, especially in terms of content type, but URI format isn't generally something the browser can specify.
Once the URL is "learned", the credentials will continue to be sent. Replicate that with query strings...
Edit: Thanks for deleting instead of just admitting you were, in this case, wrong. The comment here that I originally replied to called me wrong and said that query strings are equally efficient for this purpose.