Xk's comments

There's been a lot of research about this recently (I've brought this up on HN previously). A team from Berkeley broke 50% of the 50 most popular extensions last year [1]. Some other researchers looked at explicitly malicious extensions [2].

One thing the authors don't mention (which is brought up in the first paper): things can be much worse than this. If you inject untrusted content into the innerHTML of the extension itself, you've given an attacker the full privileges and permissions of the entire extension. This can be very serious if the extension has access to lots of data (e.g., many Chrome extensions have arbitrary access to all http and https webpages -- a bug in one of your extensions could give an attacker XSS on a user's bank).
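
As an illustration (a minimal sketch, not taken from the paper; the element id and message shape are hypothetical), here is the bug class in a background page that renders text forwarded from a content script:

  // Hypothetical background-page code: a content script forwards the
  // page's <title>, and the background page logs it.
  chrome.extension.onRequest.addListener(function (request) {
    // VULNERABLE: the title is attacker-controlled, and markup like
    // <img src=x onerror=...> will run script with the extension's
    // full privileges (e.g., access to every http/https page).
    document.getElementById('log').innerHTML = request.title;

    // SAFER: treat untrusted input as text, never as markup.
    // document.getElementById('log').textContent = request.title;
  });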

The Google team has taken steps to mitigate this [3]. If you write a Chrome extension, you're being negligent if you don't implement a strong CSP. That will prevent the previous issue entirely, but not the issue brought up by OP.
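
For reference, the policy goes in the extension's manifest. The string below is the manifest_version 2 default; the rest of the manifest is a stub:

  {
    "name": "Example Extension",
    "version": "1.0",
    "manifest_version": 2,
    "content_security_policy": "script-src 'self'; object-src 'self'"
  }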

[1] http://www.eecs.berkeley.edu/~afelt/extensionvulnerabilities...

[2] http://profsandhu.com/zhang/pub/ndss12-chrome.pdf

[3] http://blog.chromium.org/2012/02/more-secure-extensions-by-d...


> That will prevent the previous issue entirely, but not the issue brought up by OP.

Hmm, shouldn't even a normal CSP prevent the issue brought up by the OP? I haven't actually written a CSPed extension, so I may be missing a key problem, but my understanding is that no inline scripts will execute (only scripts from whitelisted domains), which would prevent an injection like that.


You would think that it would. But the way Chrome has implemented CSP, it doesn't. When you specify a CSP in a Chrome extension, it applies to the core extension (the background page) and all content scripts. It does not, however, apply to the webpages your content scripts run on.

So, if your CSP is set to deny all images, you can still add an image to a webpage. You just can't add an image to the core extension itself. Similarly, if you block eval() with a CSP, you can still eval() from the context of a webpage.
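
A quick sketch of the distinction, from a hypothetical content script (the tracker URL is made up):

  // Suppose the extension's CSP forbids images and eval().
  // Neither restriction reaches the page's DOM:
  var img = document.createElement('img');
  img.src = 'http://tracker.example/pixel.gif'; // loads fine: the page
  document.body.appendChild(img);               // fetches it, not the extension

  // eval() is blocked in the extension's own context, but a script
  // element appended to the page runs in the page's context instead:
  var s = document.createElement('script');
  s.textContent = 'eval("1 + 1")';
  document.body.appendChild(s);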


Ah, yeah, thanks :)

It seems like it would be difficult for the browser to prevent extensions from ever suffering from that kind of attack. "Any script tag that wasn't there before this function call is not executable" isn't too bad a start, but even a moderately complicated web app likely has places where you can inject content that will be operated on by JavaScript from the site itself, which then transforms it into an XSS attack. For example, many sites will sanitize any user input, but will trust the content of DOM elements they created themselves, without considering the case where the contents have been replaced by an extension that hasn't been properly sanitizing its own output.

It should still be covered by any CSP that a site has enabled, though. That (and this article) is a good argument for everyone to strongly consider adding a CSP to their sites, so that inline scripts can't be executed, even if an otherwise well-meaning extension slips up.
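
For a site, that's a single response header. A minimal example (real sites will usually need to whitelist more sources):

  Content-Security-Policy: default-src 'self'

With that in place, inline scripts are refused unless the site explicitly opts back in with 'unsafe-inline', which would defeat the point.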


Those interested in this should look at a paper from Vern Paxson and Nicholas Weaver:

http://www.icir.org/vern/papers/witty-imc05.pdf

A summary of it: A worm used a linear congenital generator to generate its randomness. It used this generator to pick which IPs to try to infect, which hard drives to write data to, and what to write. These researchers used a /8 network telescope, and were able to use that to determine, exactly, the bandwidth of each infected machine, how many hard drives each machine had, and the time it started up, and even to locate the exact machine which initially spread the worm. It's really quite amazing that you can get all of this from just packet captures, at least until you think it through.
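
For intuition: an LCG's entire future is determined by any one observed output. A small sketch (the multiplier and increment are the Numerical Recipes constants, not necessarily the ones the worm used):

  // x_{n+1} = (a * x_n + c) mod 2^32. A captured output IS the state,
  // so an observer can replay everything the worm does next.
  function lcg(seed) {
    var state = seed >>> 0;
    return function () {
      state = (Math.imul(1664525, state) + 1013904223) >>> 0;
      return state;
    };
  }

  var captured = 0x1a2b3c4d;           // stand-in for a value seen in a packet
  var next = lcg(captured);            // continue the worm's own sequence
  console.log(next(), next(), next()); // its next three "random" choices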


congruential? :)


Very interesting, thank you for sharing.


You have a very severe security vulnerability on your site. Please provide an email address in your profile that I can contact you at. (The email field itself is hidden from others.)

Edit: interwho has fixed the vulnerability. There was a CSRF vulnerability that allowed you to take over someone else's account if they visited your site.


The site is asking for an email which also happens to be a PayPal account (the placeholder text is "Email (and PayPal) address..."), along with a password. The user is not a known quantity, and this is his/her first submission to HN; it's very possible he/she is hoping that the PayPal email addresses and passwords people enter will match. There is no HTTPS, no seals or verification, no guarantees of the security of any of your data, and no guarantee that your password is not being stored in plaintext. There is a security vulnerability, and the site was a purchased template. It's quite possibly legit, but without more information I would avoid it.


I'm bcrypting the passwords.

Exact method: sha1(bcrypt(sha1(md5.'othersalt').'salt').'anothersalt') and a few more salts + sha1s


Which means nothing if it is possible to sniff the passwords from the HTTP request. Firesheep, anyone?


You're right, I'll get a security cert + force ssl.


>You're right, I'll get a security cert + force ssl.

As someone who has worked in security for years--in particular, application security assessments--thank you for taking the sometimes hard-to-swallow criticism well, and deciding to actually fix things rather than just deflect the issue. You probably have no idea how many (even reputable) organizations decide to "accept the risk" and ignore security findings. (Edit: More so than the SSL issue, I'm talking about fixing the CSRF)

>The user is not a known quantity and it's his/her first submission to HN, it's very possible he/she is hoping that PayPal email addresses/passwords that you put in match.

There is nothing wrong with the logic in this statement, but you also need to be careful how far you take it. One could argue that any of the small "Show HN" posts around here are hoping to harvest credentials. In fact, I'm sure that some of them do. When using software as a service--or indeed, any web application--there is an inherent degree of trust involved. Even if the user had made many HN posts, hadn't bought a pre-made site (which looks nice, IMO), or had purchased an HTTPS certificate... credential harvesting would still be a real threat.

Even bigger services that claim to encrypt password databases have often been shown to in fact do nothing of the sort (e.g., they send password reminder emails containing your original password, which is only possible if it's stored recoverably).

This is why security guys worth their salt will always suggest using random passwords for every service you sign up for and keeping them in an encrypted file a la KeePass, or in a TrueCrypt container with a long, complicated "master password" for the archive. Additionally, it's always a great idea to enable 2-factor authentication wherever possible (for example, on Google accounts).


If I didn't take any criticism, how would I improve? :)

As to the fact that this is my first post here, I've been a lurker for a long time, and finally had something good to post. I've been on reddit and a few developer forums under this username (and interwhos) for much longer.

Thank you for writing this.


Done. You can also contact me via the contact link at the bottom of the page.

Thanks for letting me know.


Thanks. Will do.

edit: support email sent


Got it. Thanks a ton!


Fixed. Thanks!


CSRF is NOT a browser vulnerability. The browser is doing exactly what it's supposed to do: load content. The browser cannot (and should not) attempt to distinguish the "evil" HTTP requests from the "good" ones. The browser's job is to make requests.

Now, you could argue the browser's job should be to implement security features as well. It does, after all, implement the same-origin policy. But, if you think about it, there is no good way for the browser to fix the CSRF issue. You can ask the user, which is what's suggested, but that never really works. They'll do one of two things: click "okay" every single time, or stop using your browser.

I would guess well over half of all websites do one of the following: (1) load an external JS file, (2) load an external image, (3) load an external CSS file, (4) use an iframe which points to a different origin, (5) use a JS redirect, (6) use a meta redirect, or (7) open a new window.

The proposed "solution" to CSRF stops ALL of these use cases. The user would have to manually approve each and every one of them. Given that well under 1% of alerts would be true attacks, the user would almost certainly click "okay" on the attacks as well: they would have been trained by thousands of other alerts that this is an acceptable thing to do.

There was a paper by Barth and Jackson on CSRF defenses where they propose an Origin header, but that's the extent to which security is implemented in the browser. It is fundamentally up to the web application to verify that the user did in fact initiate the request. No amount of code in the web browser can get around this fact.
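
The usual application-side check is a per-session secret token. A minimal sketch in Node (the helper names and session shape are mine, not from the paper):

  var crypto = require('crypto');

  // Issue a random token, store it in the session, and embed it in a
  // hidden field of every form that performs a state change.
  function issueToken(session) {
    session.csrfToken = crypto.randomBytes(32).toString('hex');
    return session.csrfToken;
  }

  // On each state-changing request, compare the submitted token to the
  // session copy (constant-time compare to avoid timing leaks).
  function verifyToken(session, submitted) {
    var expected = Buffer.from(session.csrfToken || '', 'utf8');
    var actual = Buffer.from(String(submitted || ''), 'utf8');
    return expected.length === actual.length &&
           crypto.timingSafeEqual(expected, actual);
  }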


>I would guess well over half of all websites do one of the following: (1) load an external JS file, (2) load an external image, (3) load an external CSS file, (4) use an iframe which points to a different origin, (5) use a JS redirect, (6) use a meta redirect, or (7) open a new window. The proposed "solution" to CSRF stops ALL of these use cases.

You must be kidding me. Please point out where in my post I said to deny ALL requests. I was talking ONLY about POST requests. I probably forgot to add that :) So I'm talking only about form submissions; GET is fine, sure.


Either you do it for everything, or you do it for only POST and you end up missing half of the vulnerabilities. Correct me if I'm wrong, but your CSRF attack used a GET request, did it not? [1]

Web applications make state-changing operations on GET requests. You might not like it, but they do.

[1] <img src="https://mail.google.com/mail/u/0/?logout" style="display: none;" />


>Web applications make state-changing operations on GET requests. You might not like it, but they do.

But when a developer makes a mistake with GET, it is 100% his problem; that's beyond question. He should be punished :D


Nonsensical. CSRF isn't God's punishment for REST-lessness.


You're both just choosing different places to draw the line between developer responsibility and browser responsibility.


That is like saying "you're both just suggesting two totally different designs for the HTTP security model".

His model is wrong. Again: I assume he wants to know that, so, bluntness.


Perfectly solid web apps routinely ask browsers to POST to other domains.


So rather than deny ALL requests, I think it would work if browsers merely stopped passing cookies on cross-site POST form submissions, no?

Then if 3rd party sites wanted to still use form submissions, they could use an auth token in the form (though I'm unsure why they would do this instead of using JSONP).


Firefox already blocks off-domain POST requests, unless the 3rd party domain responds to an OPTIONS preflight request.

> So I'm talking only about form submissions; GET is fine, sure.

Google's logout CSRF works because the logout link is a GET request. So, no, there is no quick fix.


No it does not.

---

https://developer.mozilla.org/en/http_access_control#Simple_...

A simple cross-site request is one that:

- Only uses GET or POST. If POST is used to send data to the server, the Content-Type of the data sent to the server with the HTTP POST request is one of application/x-www-form-urlencoded, multipart/form-data, or text/plain.

- Does not set custom headers with the HTTP Request (such as X-Modified, etc.)

---

This is actually a big deal, since it means you can send a cross-domain multipart POST with no preflight. That allows for an effective CSRF attack against file upload systems.

And of course, cross-domain POST requests via <form> tags have always worked and will continue to work.


Am I missing something here?

Let's say you're logged into Gmail and Gmail has no CSRF protection anywhere.

You're logged in while visiting my site. In my site, I include a little bit of JavaScript to make a POST request to Gmail telling it to forward copies of all your incoming email to my email address.

This will not work even without CSRF protection. It would only work if Google sends back the header Access-Control-Allow-Origin: mysite or Access-Control-Allow-Origin: * as noted in the section you linked to.

Of course, I could also try to trick you into filling out a form whose action actually points at Gmail and include all the hidden input tags to set you up for forwarding emails to me, but you would know something fishy is going on because it would redirect you to Gmail.


"This will not work even without CSRF protection."

It actually will work.

What you're describing is what's known as a "simple" request in XMLHttpRequest terms. That means there is no pre-flight necessary. Your browser will simply make the POST as requested and receive the response. It won't make the response available to you since the Access-Control-Allow-Origin header isn't set, but you're a malicious attacker in this example and you don't care what the response is: you just care that you were able to make the request. ;-)

You could even do this by creating an HTML form that POSTs to the right URL and using JavaScript to submit it automatically when the page loads. Same exact thing: no CORS checks.
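
Concretely, the attacker's page could be nothing more than this (the target URL and field name are invented):

  <!-- Hosted anywhere; the victim only has to visit while logged in. -->
  <form id="f" method="POST" action="https://mail.example/settings/forward">
    <input type="hidden" name="forward_to" value="attacker@evil.example">
  </form>
  <script>
    // A plain form POST needs no preflight: cookies are attached
    // automatically, and the attacker never sees the response.
    document.getElementById('f').submit();
  </script>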

If a pre-flight were necessary you would be right. The browser would send an OPTIONS request to the server, the server would respond without the appropriate headers, and the POST request would never be sent.

Let me know if any of this needs further explanation!


Oh, I see now. I had assumed that because I couldn't get the response, the request itself was blocked.

Thanks!


I would like to know one thing.

Who the hell thought it was a good idea to allow cross-domain XMLHttpRequests? Given that the standard says that POST is for modification, no other website should ever make those requests.


From a technological perspective, the CORS standard for 'simple' POSTs is no different from what you can already submit via a form. In that way, it actually makes a lot of sense.

And the whole point of CORS is that some websites do want to make those requests. ;-)


Researchers at UC Berkeley studied this exact question. About 40% of extensions contain at least one injection vulnerability. Some really popular extensions were vulnerable -- even a couple of Google-authored ones.

http://www.eecs.berkeley.edu/~afelt/extensionvulnerabilities...


From the same paper:

Starting with Chrome 18, extensions will be subject to a CSP that enforces some of these bans [13]. Our study partially motivated their decision to adopt the bans [1], although the policy that they adopted is slightly stricter than our recommendations. The mandatory policy in Chrome 18 will ban HTTP scripts in core extensions, inline scripts, and dynamic code generation. Due to technical limitations, they are not adopting a ban on adding HTTP scripts to HTTPS websites. The policy will remove all of the core extension vulnerabilities that we found. The only extensions that the policy will permanently break are the two extensions that rely on eval.

The paper is mistaken in that these changes are actually coming with the manifest_version=2 property that is optional starting with Chrome 18, not required. However, we will be slowly transitioning the ecosystem over to this version and will eventually require it.


There is an XSS on pen.io. I posted this a while ago, but feint didn't fix it.

http://xssdemo.pen.io

Suggestion: It is very hard to allow HTML but remove JavaScript. Write a method called something like isJSPresent() and then, after you've done your filtering, check whether JavaScript is on the page. If it is, return an HTML-encoded version of the page. Then the security of your page will rely only on the correctness of that single method, and not on the correctness of your rewriter (which is much more complex).
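
A sketch of that fail-closed check (browser JavaScript for brevity; a server would use its own HTML parser, and the heuristics below are illustrative, not a complete list of script vectors):

  function isJSPresent(html) {
    var doc = new DOMParser().parseFromString(html, 'text/html');
    if (doc.querySelector('script')) return true;
    var els = doc.querySelectorAll('*');
    for (var i = 0; i < els.length; i++) {
      for (var j = 0; j < els[i].attributes.length; j++) {
        var a = els[i].attributes[j];
        if (/^on/i.test(a.name)) return true;              // onclick= etc.
        if (/^\s*javascript:/i.test(a.value)) return true; // javascript: URLs
      }
    }
    return false;
  }

  function htmlEncode(s) {
    return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
  }

  // After the existing filter runs, fail closed:
  function render(filtered) {
    return isJSPresent(filtered) ? htmlEncode(filtered) : filtered;
  }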


Please don't use this for passwords. Security is very hard to get right.

Do they do a secure delete of the contents of the webpages? Who knows.

Do they have strong physical protection around the server? Who knows.

Do they run up to date software so the machine can't get taken over? Who knows.

Can you even trust them not to log all your passwords? Who knows.

This is an interesting service for some things, but I would never use it for sending passwords (or anything equally sensitive) back and forth.

Even if you let me "encrypt" the information with a password before uploading it, if that encryption is done in JavaScript sent by the server, then as soon as the server is taken over you can't trust the encryption.


All of those objections boil down to not trusting a 3rd party service.

I wonder what objections there are to running your own service of this type. This way you could guarantee the physical security, keep up with regular patches, manage your own logging, and securely delete the secrets to your satisfaction.

The only real objection I can think of is that writing software without security holes is hard. This applies to any security-related software, however, and the solution is to use 'proven' apps that have survived scrutiny. This type of app is pretty simple, which would ideally make it relatively easy to audit.

In principle, a read-once URL that you can safely send via email seems to be a pretty efficient way of sending passwords or other keys without having to deal with GPG or similar. Just tell the client: "Click on this link; that's your password. This message will self-destruct." If it's intercepted, you can detect this and change the password/revoke the key. I'm sure I'm missing something, but if not, it would be nice to have this become the standard way of distributing new passwords or keys for services, rather than sending them by email (for those services where an initial password is generated for you).
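
A minimal sketch of the read-once mechanic (the in-memory store and URL scheme are placeholders):

  var crypto = require('crypto');
  var secrets = new Map(); // stand-in for a real datastore

  function createLink(secret) {
    var id = crypto.randomBytes(16).toString('hex'); // unguessable path
    secrets.set(id, secret);
    return 'https://example.invalid/s/' + id;
  }

  function readOnce(id) {
    var secret = secrets.get(id);
    secrets.delete(id); // the first read destroys it, so interception is
                        // detectable: the intended recipient gets nothing
    return secret;      // undefined if already read or never created
  }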


Nothing is entirely secure. We're two guys with no ulterior motives who take all reasonable precautions to keep the data safe. For most people that's not only enough, it's much better than having their passwords stored in their email archives and chat logs.


Perhaps you could go into detail about some of the security precautions which have been taken?


If the link is sent unencrypted, a potential email relay or packet sniffer could scan for links related to your website and open them before the recipient does. It would be easy to automate at any level. They wouldn't have context, sure, but they'd have whatever it is you wanted to send, and your recipient wouldn't.

I don't think this is likely to happen.

Is your delete at least permanent, even if not a secure wipe?


All of their http traffic redirects to https.


I didn't mean in reference to their site, but to how the end user transmits the link to a recipient.


The idea is that it would be detectable, because the recipient would no longer be able to view the actual link, and the password can then be changed again.


You're basically saying "don't use the internet". People who are going to use it know what they're getting into. Plus, the use case (a password to an unknown username on an unknown service) doesn't sound that dangerous to me.


Well, maybe this isn't for you. Given the number of people who use "password" or "1234" to protect their accounts, your very valid concerns don't necessarily seem like show-stoppers. I don't expect any of the things you list above to be true for most other web services, either.


I highly doubt the typical user with password "1234" is going to go through the trouble of using this service. They would most likely email the password directly ("that's secure, right?").


You know what's funny? The company that I work for wrote a web app for a large company, and we also host it for them. At one point in time, they requested a way for users to reset their passwords. We implemented it, but they never use it. They prefer to email me their passwords in plain text. I think I've handled two of these emails today alone!


So why are you obliging them?

"I'm sorry, but we recently reviewed our security practices, and we've found this method of communicating passwords to be incompatible with our dedication to protecting the confidentiality and integrity of your business data. Please use the password reset form at .."


Just reply to them with "oh, we now have an automated service for that" and a link to the password reset form. They'll learn, eventually.


No.

If an attacker gets access to the database, chances are very, very good they can get access to the key as well. Very few services separate privileges well enough that an attacker cannot gain more access once they have database access.

However... for the sake of argument, just pretend there's only a 50% chance the attacker can get access to the key. Would you rather there be a 50% chance that you leak EVERY password in the database (regardless of strength), or would you rather lose only the 5% or so of users who pick guessable passwords, because you've used bcrypt?
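
For contrast, the bcrypt side has no key to steal. A sketch, assuming the npm bcrypt package:

  var bcrypt = require('bcrypt');

  // There is nothing to decrypt: an attacker with the database must
  // brute-force each hash, and the cost factor makes every guess slow.
  var hash = bcrypt.hashSync('correct horse battery staple', 12);
  console.log(bcrypt.compareSync('correct horse battery staple', hash)); // true
  console.log(bcrypt.compareSync('1234', hash));                         // false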


If you are storing something in the database which needs to be decrypted (i.e., you are storing credentials to access some other system), it is common sense not to have that key in the database.

The key should be provided to your application on startup.

Or sometimes the key can be in a wallet, and that wallet can be opened only during startup of the application: the application should prompt the admin to enter the password needed to open the wallet.

At least that's how I design my systems... Maybe I'm wrong...


Prompt the admin for a password? So there's no way to do an unattended reboot of the application? That doesn't sound very practical.


Hey... if you want to be secure, that's what you need to do. The keys must not be easily accessible.

Anyway, applications which need to be secure are not the ones you'd want restarted automatically or unattended: an automatic restart can in many cases just make the situation worse (e.g., data corruption).

You can always pay big bucks and use Oracle Database with Transparent Data Encryption: in that case, you only need to enter the password to open the wallet on startup of the database.


We're required by a client (a major financial firm) to keep our sensitive data on an encrypted volume. Sometimes, calling the 24/7 techs to enter a passphrase is a perfectly appropriate solution, especially if you have enough capacity to let a server or two wait until the morning.


Fine. Assume it's one bit per voxel.

512 petabits = 64 petabytes.

So only 64 petabytes of data. I'd like to see the server that processed those 64 petabytes.

Notch also points out that even so, 8m is quite a small height. Take it to 16m and now you need 128 petabytes of space.


Nope, more like 64 petabytes plus 1 bit. Because almost all of that new top 8m is the same atom (empty air), if they are using an octree to store the voxels, then it would take 1 extra bit to store an empty top 8m. Obviously if there are things in the top 8m, it will take more space, but only for the things that are there.
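
The octree intuition, sketched (illustrative only, obviously not any engine's real format):

  // A node is either a uniform leaf (one value for a whole cube) or an
  // internal node with 8 child cubes. A uniform region of ANY size
  // collapses to a single leaf, so a vast block of empty air costs a
  // handful of nodes, not petabytes.
  function Leaf(value) { this.value = value; }
  function Node(children) { this.children = children; } // always 8

  // Merge 8 children: if all are identical leaves, the whole cube
  // becomes one leaf.
  function merge(children) {
    var uniform = children.every(function (c) {
      return c instanceof Leaf && c.value === children[0].value;
    });
    return uniform ? new Leaf(children[0].value) : new Node(children);
  }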


That's not the case.

Each Telex station has its own private key, which it uses to decrypt the 224-bit tag. If one Telex station loses its private key, the user will just use another.

