Why Mastercard Doesn't Use OAuth 2.0 (mastercard.com)
144 points by hitr on July 8, 2018 | 52 comments


Related: here's a write-up from one of the OAuth 2 authors on the problems he sees in OAuth 2 and why he thinks OAuth 1 is better:

https://hueniverse.com/oauth-2-0-and-the-road-to-hell-8eec45...


It looks like they are concerned that OAuth 2.0 doesn't include a cryptographic signature of the request body, as seen in OAuth 1.0.

My understanding is that OAuth 2.0 dropped that signature in favour of requiring TLS to protect against tampering. I'd be interested to know why Mastercard doesn't consider this to be as good as the request body signatures in OAuth 1.0.
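
For reference, here's a rough sketch (mine, not Mastercard's actual code) of how an OAuth 1.0a HMAC-SHA1 signature is computed, including the optional body-hash extension that lets the signature cover a raw request body:

    # Rough sketch of OAuth 1.0a request signing with HMAC-SHA1.
    # The oauth_body_hash parameter is an extension to core OAuth 1.0.
    import base64, hashlib, hmac, time, uuid
    from urllib.parse import quote

    def sign_request(method, url, body, consumer_key, consumer_secret, token_secret=""):
        params = {
            "oauth_consumer_key": consumer_key,
            "oauth_nonce": uuid.uuid4().hex,
            "oauth_signature_method": "HMAC-SHA1",
            "oauth_timestamp": str(int(time.time())),
            "oauth_version": "1.0",
            # Hash of the raw body becomes a signed parameter itself.
            "oauth_body_hash": base64.b64encode(hashlib.sha1(body).digest()).decode(),
        }
        # Signature base string: METHOD & encoded URL & encoded, sorted parameters.
        norm = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                        for k, v in sorted(params.items()))
        base = "&".join(quote(s, safe="") for s in (method.upper(), url, norm))
        key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
        sig = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
        params["oauth_signature"] = base64.b64encode(sig).decode()
        return params  # sent in the Authorization: OAuth ... header

Tamper with the body anywhere between signer and verifier and the signature check fails, regardless of what happens to the TLS connection in between.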


It's quite common for companies to MITM HTTPS requests (and install their own root certificate on all company-owned machines so the browser raises no errors).

Some countries do so as well, for example Kazakhstan and China.


It's common to MITM TLS in the banking sector in the US.


And people from that country have those certificates installed? Voluntarily?


In the case of countries, you don't need the certificate installed for the MITM to work. You just need it if you want to get rid of the warning on every single HTTPS website. Unless you tunnel your traffic, it's visible.

In the case of large corps, you get assigned a laptop / desktop setup by the company. You probably authenticate against AD and don't even get the privileges to add/remove certificates.


Also, if the country has its own root CA, it can just sign arbitrary certificates. https://en.wikipedia.org/wiki/CNNIC#Fraudulent_certificates


Notice of course that that little stunt resulted in them being removed from everybody's trust stores. And it's not like you can just get away with it these days, since certificates are all publicly logged now.


> since certificates are all publicly logged now.

Only some of them are. All EV and some DV get published.


Didn't realize that; apparently all Symantec certs require it, and I misunderstood that as industry-wide.


Not everybody's. There's the whole of China, where the certs remain installed.


And how is that accomplished? I doubt this happens on private PCs.


Just a single data point but the last time I was in Beijing, my iPhone prompted me to install a certificate before I could hop on to the airport WiFi.

I just spent the next 3 hours of the layover without internet.


Uyghurs in China are required to install a mandatory tracking app on their mobile phones.


From a technical PoV, it feels like it's easier to argue after the fact "look, you sent this message, you signed it", vs. "trust us, all comms were over TLS, we promise our logs are accurate and your token was not leaked".


An argument I've heard against TLS is that it's easy for clients to get wrong. In some cases, client code needs to directly check that the certificate matches the intended domain, and forgetting to do so makes TLS worthless because an attacker can just use any valid certificate. In other cases, certificate checking runs into some problem, and an inexperienced developer finds a "solution" on StackOverflow to just disable certificate checking, which, again, makes TLS worthless. In other cases, a client might make a valid TLS request to the wrong server (either by mistake or due to some other attack).

With OAuth 2, any of these problems will leak your bearer token, meaning that an attacker can act as you until the token expires.

With OAuth 1, you're typically going over TLS anyway, but even if an attacker knows the contents of all requests, they won't be able to act as you because they still won't be able to sign any future requests.
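
To make the "disable certificate checking" anti-pattern concrete, here's roughly what it looks like with Python's requests library (the endpoint and token are made up):

    # Anti-pattern: a bearer token sent over a connection whose certificate is
    # never validated. Any MITM with any certificate can now read and reuse it.
    import requests

    token = "EXAMPLE_BEARER_TOKEN"  # placeholder
    resp = requests.get(
        "https://api.example.com/v1/accounts",        # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        verify=False,  # the "fix" copied from StackOverflow: disables cert checks
    )

    # The actual fix: leave verification on (the default), or pin a CA bundle.
    resp = requests.get(
        "https://api.example.com/v1/accounts",
        headers={"Authorization": f"Bearer {token}"},
        verify="/etc/ssl/certs/ca-certificates.crt",
    )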

Edit: I just dug up the blog post I've read that describes most of the points I made above: https://hueniverse.com/oauth-bearer-tokens-are-a-terrible-id...


Having developed apps that use TLS in many languages, I can say this is very true for most of them. I was pleasantly surprised by the Go TLS library; it gets all of this correct by default.


I'm not sure I understand the concern with integrity of OAuth 2.0 payloads. Sending the request over HTTPS already ensures that the request is not tampered with, and also guards against replay attacks.


No, it can potentially ensure integrity between a client and the first TLS hop; that's about it.

You don't know which client it actually came from, and you can't ensure integrity within the transaction flow of your app.

Say the request terminates at an LB proxy, then passes through an API gateway into an MQ, then goes through multiple servers: you need some form of integrity checking for the request, and OAuth 2.0 doesn't provide it.
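
A sketch of the kind of application-level integrity check I mean (the key handling here is illustrative only):

    # Sketch: a MAC computed over the message body at the edge and re-verified
    # at every internal hop, independent of where TLS happened to terminate.
    import hashlib, hmac

    SHARED_KEY = b"provisioned-out-of-band"  # placeholder; use a KMS/HSM in reality

    def mac_for(body: bytes) -> str:
        return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

    def verify(body: bytes, received_mac: str) -> bool:
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(mac_for(body), received_mac)

    # The edge proxy forwards the MAC alongside the message (HTTP header, MQ
    # property, etc.); any downstream service checks it before acting on the body.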


Wouldn't this be a reasonable concern if you consider that they might use additional equipment to terminate the HTTPS connection at an early layer of their network?


No. You don't know where the TLS terminates.


Breaking IP (e.g. MITM the server) means you get a TLS certificate anyway. This might be easier done than said[1].

Breaking IP might not even be necessary because programmers are dumb[2].

[1]: https://www.bleepingcomputer.com/news/security/dns-poisoning...

[2]: http://web.archive.org/web/20120317165131/http://forum.devel...


They did this to provide message-level integrity. OAuth 2 switched to transport-level confidentiality/integrity.

It's worth noting that message-level integrity was not a design goal of OAuth 1; it was a consequence of being based on OpenID 1/2, which were explicitly meant to run on HTTP without TLS so that they could be adopted by blogs. This was pre-SNI and pre-cheap-certs, so requiring HTTPS increased the hosting cost of a blog by an order of magnitude.

When the constraints changed such that requiring HTTPS was feasible, it greatly simplified OAuth. Some of these simplified proposals for OAuth became the input for OAuth 2 (where complexity was subsequently added back in the form of variants to support new use cases).

Relying on message level integrity in a protocol where such a thing was basically a side-effect of avoiding hosting costs would make me very nervous.

The clearest issue I can point to is that there is no response message integrity in MasterCard's system - an intermediary can block requests to MasterCard and give back fraudulent responses (yes, of course that payment went through!). This throws a ton of application-dependent security considerations into the system.


Using TLS makes it acceptable to send cleartext passwords. I don't know why; it seems lazy.

So, I understand why Mastercard doesn't rely on that.


> TLS makes it acceptable to send cleartext passwords

What do you mean? There exists a NULL cipher, but it needs to be agreed on by both sides. If mastercard doesn't allow NULL, you can't send anything in cleartext. Or did you think of something else?


The problems are before and after the TLS tunnel.

I've seen a BigCorp load balancer / web firewall log the first 1KB of each HTTP POST body into a permanent archive. A typical login submission is much smaller than that. Also in some networks the TLS connection is terminated by a frontend server and backend communication is plaintext HTTP.

While these examples are obviously bad practice, having your requests signed and not leaking user passwords would easily nullify their impact.


>What do you mean?

Login with user/password. Now the receiving end knows your plaintext password. It might get hashed, but you don't know when. Twitter, I think, had the latest failure with that, logging passwords.

TLS just means that no MITM can see the data and that you can somewhat verify who you are connecting to.


I think https://aaronparecki.com/oauth-2-simplified/ explains that the cryptographic signature approach (if that's what they mean by "client secret") was discarded because mobile apps and single-page Javascript apps can't maintain the confidentiality of a secret anyway.

So maybe OAuth 1.0 is only better for apps running on a server?


I once asked a related question on Security StackExchange:

https://security.stackexchange.com/questions/161734/why-does...


Any app that takes security seriously will need to take a layered approach. So while OAuth 2, which is just a framework (unlike OAuth 1.0a), seems to outsource its integrity protection to TLS, this isn't enough: others have already pointed out that many companies hijack TLS at their edge proxies. Banks do this by requirement of the regulator.

So you would need additional defenses against tampering, such as OpenID Connect. In the banking apps that I have been working with, we implemented additional symmetric encryption on top of the protocol (yes, obfuscating the keys) and all other kinds of small things.
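
As a very rough illustration of the "extra symmetric layer on top of TLS" idea (key handling here is deliberately naive and nothing like the real implementation):

    # Sketch only: payload-level authenticated encryption layered on top of TLS,
    # using the cryptography library's Fernet recipe.
    from cryptography.fernet import Fernet

    app_key = Fernet.generate_key()  # in practice provisioned/obfuscated, not generated inline
    cipher = Fernet(app_key)

    def wrap(payload: bytes) -> bytes:
        # Encrypt-and-authenticate before the payload ever reaches the TLS socket.
        return cipher.encrypt(payload)

    def unwrap(blob: bytes) -> bytes:
        # Raises cryptography.fernet.InvalidToken if tampered with along the way.
        return cipher.decrypt(blob)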

I'm glad Mastercard does not rely solely on TLS.


So I guess the alternative would be to tunnel TLS inside TLS. So they can set up fake CAs to intercept the outer TLS, but not the inner TLS, satisfying both bank regulators and actual security. Until regulators catch on and we have to go around in circles again ...


It's unfortunate that big companies are pushing for OAuth 2.0 and trying to blindside developers, as if OAuth 2.0 were an upgrade to OAuth 1.0a. It is not! OAuth 1.0a provides authenticity, integrity, and non-repudiation, something that OAuth 2.0 cannot match.


The problems remind me of https://github.com/hueniverse/oz/; it's from one of the former OAuth guys.


[flagged]


> This author doesn't seem to understand basic security.

This seems like an overly broad (not to mention hurtful) way to disagree with a technical assessment.

And really, I'd say that part of "understanding basic security" is understanding that there's value in multiple layers of security. OAuth1a+TLS provides two separate defenses against impersonation attacks, while OAuth2+TLS only provides one. There are many ways that TLS can fail in practice, and only OAuth1a stops impersonation attacks if that happens.


This[0] paper argues that "OAuth 2.0 is intrinsically vulnerable to App impersonation attack due to its provision of multiple authorization flows and token types."

[0] Application Impersonation: Problems of OAuth and API Design in Online Social Networks

http://cosn.acm.org/2014/files/cosn018s-huA.pdf


The paper says that there are ways for users to find out their own access tokens and then "impersonate the app". That might be a valid threat model once the machines take over, but for now my access token identifies me and not my app.


That makes sense in the bizarre backwards world where security implies security from the users.


They (quasi-implicitly) say that a design goal of their system is "message level security", and stress the desire for "non-repudiation", which is not provided by TLS per se.


TLS client certs are a thing


TLS certs (client or server) are only used during the handshake. After that it's symmetric crypto, so no non-repudiation, if that's what they want. (Real non-repudiation requires a lot more than just signatures, though; otherwise you could just claim that you lost control of your key.)


Yes, the handshake is authenticated and can have two-way non-repudiation. The symmetric crypto that follows the handshake uses a secret key agreed upon during the handshake. Because only the client and server know that secret, all messages encrypted with it also have non-repudiation.

I don't get your point about losing keys; is there any form of non-repudiation that does not require keeping secret material secret? Are you also neglecting the fact that revocation is a thing and that TLS non-repudiation works by way of a trusted 3rd party?


Non-repudiation means that the party that sent a message cannot later deny having sent it. In TLS both sides negotiate a shared secret key(s) that they then use for fast symmetric crypto for the actual messages. Symmetric because both sides know the same keys.

This means that after the fact either side can claim that the other side fabricated messages that apparently came from them, because they could have done.

The point about losing keys is that if I can plausibly claim that I “lost” control of my key then I can claim that an imposter signed a message that apparently came from me. So even with digital signatures you usually need additional controls (hardware, processes, legal/regulatory etc) to really guarantee non-repudiation.


I now understand your first point.

> The point about losing keys is that if I can plausibly claim that I “lost” control of my key then I can claim that an imposter signed a message that apparently came from me. So even with digital signatures you usually need additional controls (hardware, processes, legal/regulatory etc) to really guarantee non-repudiation.

Depends on the use-case, right? For example, FOSS projects use GPG signatures for non-repudiation and authentication, but they can also say "the key was compromised x weeks ago". I think there is only so much a communication protocol can do.

For where OAuth 2 would be used, I believe what some (like OP) want is session-level authentication and non-repudiation: to say "I was really speaking to <other end>", as opposed to being able to say "specific payloads and transactions with <other end> were really made, with non-repudiability". For the latter, like you suggested, a protocol with awareness of the specific data, transactions, and payloads is needed. OAuth 2 and TLS are session-aware, not application-aware.


Right. Most applications and protocols only need authentication. Non-repudiation is kind of an extreme security property, rarely needed outside of legal/financial transactions.


Can't they just require that their implementation of OAuth 2 also uses a signed payload?


Then it would no longer be OAuth 2? Standards exist to prevent deviations. How do you manage signing keys and distribute them? What signature scheme, encoding, etc. should be used?


Playing devil's advocate: show me a half-decent commercial API that doesn't specify any rules of engagement. All the minutiae would be specified there.


What they are doing is no longer OAuth 1, since they require a hash of the body as a parameter for input into the OAuth 1 signature/MAC.


Sure. For example they could require you to sign the body of the HTTP request and put the signature in the HTTP "Authorization" header.


The Authorization header is where the authentication token goes. It's already used.


There is a (now expired) draft for sending signed HTTP requests using JWS that includes the access token as part of the signature data, getting around this problem.

Edit: forgot the link https://tools.ietf.org/html/draft-ietf-oauth-signed-http-req...
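
The rough idea, sketched with PyJWT (claim names here are illustrative, not the draft's exact structure):

    # Sketch: a JWS whose payload binds the access token to a hash of the request,
    # so the signed object can travel in its own header.
    import base64, hashlib, time
    import jwt  # PyJWT

    def signed_request(access_token, method, path, body, signing_key):
        body_hash = base64.urlsafe_b64encode(hashlib.sha256(body).digest()).decode()
        payload = {
            "at": access_token,   # the OAuth 2 access token is part of the signed data
            "m": method,
            "p": path,
            "b": body_hash,       # hash of the request body
            "ts": int(time.time()),
        }
        return jwt.encode(payload, signing_key, algorithm="HS256")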


Curious how many have moved from LDAP to using OAuth?

It would seem the future for enterprise will be OAuth.



