
Personally, I would be happy to eliminate any possibility of inter-site forgeries. It's unlikely that my bank will be putting user-generated content in front of me any time soon (and if they do I presume that they'll sanitize it well enough to not be a problem).

It troubles me deeply to have CSRF declared a purely server-side application problem. The browser is quite literally my agent in all of my interactions with the web. It is an extension of me, and when it does things that pretend to be me, that feels very wrong. That is why I propose new HTTP verbs: my browser should know (and verify) that when it sends out a SEC* request, my eyeballs are on that data and my finger physically clicked that button, and it can do this if those requests are, essentially, tagged as particularly sensitive.
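To make that concrete, here is a rough sketch of what a tagged request might look like on the wire. This is purely hypothetical: SECPOST is the verb proposed above, not anything browsers or servers actually implement, and the host and path are made up. Python's http.client will happily send a custom method, so you can see the shape of it:

  import http.client

  # Hypothetical: "SECPOST" is the proposed verb for sensitive, user-initiated
  # actions. A real server today would most likely answer 501 Not Implemented.
  conn = http.client.HTTPConnection("bank.example")
  conn.request(
      "SECPOST",                      # tag the request as security-sensitive
      "/transfer",
      body="to=alice&amount=100",
      headers={"Content-Type": "application/x-www-form-urlencoded"},
  )
  print(conn.getresponse().status)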

To place the onus solely on the server side is for me to abrogate my responsibility to fully control my browser-as-agent. Frankly, even if the server successfully rejects every forged request, it is not acceptable that my browser, acting as my trusted agent, attempted the attack in the first place.




There are 3 major browser codebases. There are hundreds of thousands of web applications, each with different security needs. I think it's lunacy to suggest that the browsers should take on this problem.

At any rate: there aren't going to be SECGET and SECPOST verbs, so the academic argument over whether an end-to-end approach is better than Apple, Mozilla, Google, and Microsoft deciding amongst themselves how security is going to work for every web application is moot.


You are missing a critical point: users do not expect software that is under their control to do things that they did not tell it to do, using their credentials, acting as them. Even if all the server-side software in the world were to be secured against such attempts, there would still remain an underlying problem: loss of control of the browser.

While the vast majority of resource requests (both primary and secondary) are beneficial, some are not. The browser currently does not have enough information to make this distinction. New HTTP verbs would give the browser enough information to refuse to directly load damaging resources.
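Here is the kind of rule a browser could then enforce. Again, this is a hypothetical sketch, since no such verbs exist; the function and argument names are illustrative only:

  # Hypothetical browser-side policy for the proposed SEC* verbs.
  SENSITIVE_METHODS = {"SECGET", "SECPOST"}

  def should_send(method, same_origin, user_gesture):
      """Refuse sensitive requests the user did not directly initiate."""
      if method not in SENSITIVE_METHODS:
          return True                 # ordinary requests are unaffected
      # A SEC* request goes out only if it targets the page's own origin
      # and was triggered by a real click or keypress, not by script or by
      # an auto-loaded image, form, or iframe planted on some other site.
      return same_origin and user_gesture

  assert should_send("POST", same_origin=False, user_gesture=False)
  assert not should_send("SECPOST", same_origin=False, user_gesture=False)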


Again: request forgery isn't an intrinsically cross-domain problem. The HTTP protocol change you propose is already worse than the server-side solutions it is meant to supplant.

Server-side request forgery tokens don't rely on browser behavior to function. They provide a much simpler and more direct security model: to POST/PUT/DELETE to an endpoint, you must at least be able to read the contents of the token. That meshes with same-origin security.
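For comparison, a minimal framework-free sketch of the token approach (the session object and field name are placeholders for whatever your application actually uses):

  import hmac
  import secrets

  def issue_csrf_token(session):
      # One secret per session, embedded in every form the server renders.
      session.setdefault("csrf_token", secrets.token_urlsafe(32))
      return session["csrf_token"]

  def check_csrf_token(session, submitted):
      # A forging site can make the browser *send* a POST with the victim's
      # cookies, but the same-origin policy stops it from *reading* the page
      # that contains the token, so it cannot fill in this field correctly.
      expected = session.get("csrf_token", "")
      return bool(expected) and hmac.compare_digest(expected, submitted or "")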


The loss of user-agent control is a serious problem independent of whether or not a malicious request is accepted. The fact that the user agent crafted and sent the malicious request at all is a problem worth solving. But you seem to believe that it doesn't matter that the UA is acting maliciously on users' behalf, that this is an inevitable consequence of the way the internet works, or that it's such a difficult problem to fix that you'd rather ignore it and focus on the server side. Or perhaps some combination of the three.

Personally, I don't believe any of those things. Server authors should certainly take point on battling CSRF. But there is an important client-side piece to the puzzle that cannot be ignored. If users cannot even prevent their own browsers from attempting malicious actions on their behalf, then something is critically wrong with browsers.



