Hacker News
Android Chrome 99 expands Certificate Transparency, breaking all MitM dev tools (httptoolkit.tech)
288 points by pimterry on May 11, 2022 | 183 comments



Enforcing CT is good, but that doesn't excuse the treatment of user-added CAs. On all platforms but Android, user-added CAs are considered particularly trustworthy. For example, Chrome Desktop, Firefox, and IE did not enforce HPKP if they encountered a cert from a user-added CA. Why does Android do the opposite? I don't see the threat model they are addressing.

We (mitmproxy) have repeatedly tried to get an answer to this from the Android folks (e.g. here: https://github.com/mitmproxy/mitmproxy/issues/2054#issuecomm...). It very much feels like they just want to kill uncomfortable privacy research.


> I don't see the threat model they are addressing.

The threat model they're addressing is the one where users have a small semblance of control over their devices and networks. I've been saying for years that HTTPS everywhere, DoH, eSNI (or its successor, ECH), etc. are part of a long-term plan for big tech to have absolute control over what users are allowed and not allowed to do with their own devices.

You can't see the threat model because, to them, you're the threat. From that GitHub issue:

> It does not prevent nor attempt to prevent you from doing those kinds of things.

The thing missing there is the *for now* qualifier. Once they know it won't impact anyone with the power to cause problems for Google, they'll remove the config flags and lock us all out. The same strategy has been used over and over and over in the last 10-15 years.


As a point of clarification, lest anyone think you are actually being serious...

The threat being addressed here is the proliferation of VPNs that also install a local trusted root[1], rogue roots being installed at border crossings[2], and entire countries mandating a MitM root to egress traffic[3].

I do believe they could have done a lot more to improve the developer experience, but this does address a legitimate security concern for millions of users.

1. https://www.techradar.com/news/new-research-reveals-surfshar...

2. https://www.vice.com/en/article/neayxd/anti-virus-companies-...

3. https://www.eff.org/deeplinks/2022/03/you-should-not-trust-r...


What gives me pause is that the above examples are state actors. With no deep knowledge of the field, I don't know how much Android Chrome can ever do to mitigate a sovereign state's policy, especially as phone systems all have some local specificities introduced either through the carrier's software or some straight exception to follow the country's regulation/culture.

[Edit: User introduced VPNs are another issue, but it then falls down on stopping a user from meddling with their phone, which is also tricky in my opinion]


What about your phone vendor injecting a CA to your phone so they can decrypt https traffic and inject ads into the webpage?

When they have that capability it’s not a big leap to other things.


Yes.

This kind of stuff was already happening at so many levels. Before HTTPS everywhere I was seeing a phone carrier auto-proxying requests and injecting additional ads on the way back. I can't imagine they just gave up on the revenue stream when pages switched to HTTPS.


These are legitimate concerns, but this is like beheading to treat a headache. Technically it works.

I'd suggest adding a notification dialog when a root cert is added, and a good, clear UI to manage the certs: when each was added, by what app, disable, remove, etc.


Yeah. Just like they do now: those are the certs. Install them. Very good for security.


As a point of clarification, lest someone take you seriously: clear warnings are what is needed. And smart users. Instead of raising awareness, Google et al. have been trying to hide the https scheme and parts of the URL, so that they can maintain control.

As someone who runs their own cloud top to bottom with custom CAs, adding a trusted root CA is a pain. Removing the ability for me to run my CA takes away control of my own device from me and puts it in the hands of the big companies.

You should hard reset your phone when crossing borders. You would do the same if someone borrowed your clothes.

Would you lend someone your clothes with your passport and wallet in them? Then why is a phone any different?


It sounds like you are the type of person that should just be rolling their own browser anyway.

Like I said, there are a lot of things they could have done better here. But the threat is real and it's not some tinfoil conspiracy by "big tech." It is our job as technologists to first and foremost do what we can to protect the 99.999% of users who do not run their own CA.


Running your own CA is a pretty common thing for companies to do, to manage internal SSL certificates. And telling systems to trust it IS a pain. Even on desktop you can't just drop a file in a folder, because Chrome and Firefox don't trust the system CAs, so you have to configure those separately, and possibly other applications as well.
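
For anyone who hasn't done it: minting the root itself is the easy part; distribution and trust configuration is the pain. A minimal sketch with Python's cryptography package (the name and one-year validity are arbitrary choices for illustration):

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # A self-signed root: issuer == subject, CA basic constraint set.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Internal Dev CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )
    # This PEM is what you then have to push into every browser/OS/app trust store.
    with open("dev-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))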

I don't think it is some big conspiracy, but it isn't a good situation.


I think there are other, more effective, mitigations in place against the threats you are describing. First, there's no automated way to install CAs on Android. Users must do this step manually (the articles you linked are about Windows). Second, if you have installed a user-added CA, you get a prominent and permanent notification – non-dismissable, reappears after reboot – that your network traffic may be monitored. All this stops the "secretly-added CA" threat.

Finally, the current implementation is not effective at protecting against country-level MITM. Attempts at country-level MITM have been thwarted by browser updates blacklisting the respective CA certs; the same can be done on Android.

I agree those are legitimate threats that need to be addressed, but there are better ways to do so, which don't come with the convenient side effect of killing privacy research.


> Users must do this step manually,

Good luck refusing if it is an edict from some governments.


I don't think the purpose of CT is to protect against anything on your local computer. It's to protect against CAs getting hacked or coerced by governments into issuing malicious certs.

CT's design really doesn't make sense if the goal is to protect against local malware. Why would it need public ledgers of Merkle trees containing every issued certificate if it was just to protect against local malware?
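
You can verify this yourself: the SCTs ride along inside the certificate as an X.509 extension, signed by the public logs. A rough sketch using Python's cryptography package (hostname is just an example; assumes the leaf has embedded SCTs):

    import ssl
    from cryptography import x509

    # Fetch a server's leaf certificate and list its embedded SCTs,
    # one per CT log the precertificate was submitted to.
    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    scts = cert.extensions.get_extension_for_class(
        x509.PrecertificateSignedCertificateTimestamps
    )
    for sct in scts.value:
        print(sct.log_id.hex(), sct.timestamp)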

Anyways, local malware isn't in Chrome's threat model:

https://chromium.googlesource.com/chromium/src/+/master/docs...

Disclosure: Google employee


The purpose of CT is so that shenanigans have to be done out in the public view where they can be more easily detected. Depending on the logs you are using, an attacker may not be able to submit a self-signed certificate (I haven't looked at which logs Chrome is using these days).

Regardless of the documentation, Chrome does roll out changes to protect against local threats. I think they just don't want to be on the hook to address every local threat. Happy to give examples in private if you want to email me.

Disclosure: ex-Google employee


Wow your first link is very alarming. I was curious about this passage from that link however:

>"The installation of an additional root CA cert potentially undermines the security of all your software and communications. When you include a new trusted root certificate on your device, you enable the third-party to gather almost any piece of data transmitted to or from your device."

I understand how they could decrypt any communication between the VPN client and the VPN server but if I was already encrypting my data using a browser that wouldn't give them anything more than encrypted traffic. I do understand the overall threat of these companies installing a Root CA but is that particular passage a little disingenuous or am I missing something much more obvious?


> but if I was already encrypting my data using a browser

Encrypting against whose keys? The website you are visiting? The malicious VPN company?

The entire point of user-added root CAs is that they can place themselves between you and whoever you're communicating with and intercept/modify it all. And you're unlikely to be warned about it at all.


Encrypting with the public key of the site I'm visiting, for example google.com. A VPN provider that installed a root CA without my knowing still wouldn't be able to read the traffic being encrypted with Google's public key. They could see the SNI and see I am visiting Google; that's understood. Perhaps that's what the author meant in the passage I quoted above.


And how do you know you're actually encrypting against google.com's public key, and not somebody else's key?

A VPN provider is in the perfect position to MITM all of your traffic, swapping out any site's public keys with their own in real time. If your VPN app has installed an alternative Root CA on your device, you'll get no warning that this has happened.


My understanding was that for Chrome, the CA had to be in the Chrome root store, and that this is what is used instead of the OS-level root store where the VPN providers would be installing theirs. Doesn't Mozilla also ship with its own preferred root store as well?

https://www.chromium.org/Home/chromium-security/root-ca-poli...


From that document:

"If you’re an enterprise managing trusted CAs for your organization, including locally installed enterprise CAs, the policies described in this document do not apply to your CA. No changes are currently planned for how enterprise administrators manage those CAs within Chrome. CAs that have been installed by the device owner or administrator into the operating system trust store are expected to continue to work as they do today."

In other words, locally installed certificates are normally treated as trusted by Chrome.


Thanks. I completely misunderstood that. That makes total sense for the enterprise use case too; otherwise it would probably be a non-starter for many corporate IT departments.


Just like a normal root CA. Who should I trust more? Microsoft? Google? Facebook? The Netherlands' root CA or Ghana's root CA? I'm really sure Google only collects data to make better products for _me_.


The point is to use the injected root CA cert for TLS handshakes, then use the VPN to make sure that all traffic goes through a node that can mitm the TLS connections (and I guess they just get the unencrypted traffic for free).


You know that sounds reasonable until you realize you could solve this 'issue' by not trying to lock out users from root in the first place.

Give everyone root, sure they can mandate some mitm whatever at the border but it won't matter once you disable it with your root...

I think the post you are responding to is far more salient than these bogeymen you are inventing...

Sure, there are going to be issues with some folk installing spyware, but honestly not having root hasn't solved this issue.


People should have the right to choose their own "threat model". Google has a threat model. A computer user may have a threat model. It is absurd to assume they will always be the same. The interests of Google may conflict with the interests of the computer owner.

Google wants to choose the threat model for everyone. There is no opt-in or opt-out. One size does not fit all.

Would enjoy more details on how ESNI/ECH/whatever will be used to exert "absolute control". SNI is certainly being used for censorship, but would like to know how ESNI can be used in similarly malicious ways.

Using DoH run by a third party is optional, as is using traditional DNS from a third party. One can still utilise these protocols with localhost servers. Computer owners have ample storage space today for storing DNS data. I use locally stored DNS data. I put the domain to IP address information in map files and load them into the memory of a localhost proxy.
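
To make that concrete, here is roughly what "map files loaded into memory" can look like in Python, assuming a hypothetical hosts.map file with one "domain ip" pair per line (a simplified sketch, not my actual setup):

    import socket

    # Load the domain-to-IP map into memory once at startup.
    local_map = {}
    with open("hosts.map") as f:
        for line in f:
            domain, ip = line.split()
            local_map[domain] = ip

    _real_getaddrinfo = socket.getaddrinfo

    def mapped_getaddrinfo(host, *args, **kwargs):
        # Answer from the local map first; anything unknown resolves normally.
        return _real_getaddrinfo(local_map.get(host, host), *args, **kwargs)

    socket.getaddrinfo = mapped_getaddrinfo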

Public scans from Rapid7 used to be a good public source of bulk DNS data, in addition to public zone file access through ICANN, e.g., czds.icann.org. (A variety of third-party DoH servers can be used to retrieve bulk DNS data as well.) Alas, Rapid7 have recently decided not to share the DNS data they collect with the public anymore.


> People should have the right to choose their own "threat model". Google has a threat model. A computer user may have a threat model. It is absurd to assume they will always be the same. The interests of Google may conflict with the interests of the computer owner.

I think that you are correct.

However, there are also such things as dynamic IP addresses, which might also have to be considered if you want to store DNS data entirely locally.


I have been using local DNS data for over a decade. At the start I believed most DNS data was truly dynamic. Today, I believe that is false; I have the historical DNS data to prove it. It is actually only a small minority of websites I visit that are changing hosting providers frequently or perhaps periodically switching between a selection of hosting providers. I do not mind making occasional manual changes for that small minority as I want to know if a website is changing its hosting. There are legit reasons to keep changing IP address but there are illegitimate ones, too. If I am lazy and do not want to look at the details when something changes, I can just redirect requests to archive.org or something similar, or a search engine cache. This works surprisingly well.

I once had someone challenge me on HN arguing that the IP address for HN was dynamic, with no proof. However I know it rarely changes because I have the DNS data stored locally and I have not changed it in years. It is baffling to me why some people refuse to accept that most DNS data can be, and in fact is, relatively static. It is too easy to test. Perhaps those who like to use DNS for load balancing do not appreciate the idea of the end user making the choice of which working IP address to use. However, they can, and in my case, they do.


The place I work uses AWS EC2 instances for everything. They get created and destroyed fairly frequently, and change public IP addresses as a result.

I wish this wasn't the case, because this includes all the things I need to access through the VPN, so several times per week I have to go rerun the "DNS lookup this list of domains and static route the resulting IP addresses through the VPN" script again.
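
The script is essentially this (a sketch with hypothetical domain names; it prints the route commands rather than running them):

    import socket

    VPN_ONLY_DOMAINS = ["internal.example.com", "ci.example.com"]

    for domain in VPN_ONLY_DOMAINS:
        try:
            _, _, addrs = socket.gethostbyname_ex(domain)
        except socket.gaierror:
            continue  # not resolvable right now; rerun later
        for ip in addrs:
            # Has to be rerun every time the EC2 instances are replaced.
            print(f"ip route add {ip}/32 dev tun0")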


"They get created and destroyed fairly frequently, and change public IP addresses as a result."

That's half the story. A load balancer (static IP) will often offload the traffic to another IP. DNS is not doing much for you here.

Furthermore, DNS often has a significant lag time between changes - switchovers are usually measured in days - so relying on DNS to cover your routing is usually only practical with a custom DNS resolver anyways.

Even in the case of websites with truly dynamic access like this, then, it's enough to run a targeted query from your local resolver - an argument for local resolvers over your custom-roll-a-script solution...


> The threat model they're addressing is the one where users have a small semblance of control over their devices and networks. I've been saying for years that HTTPS everywhere, DoH, eSNI (or its successor, ECH), etc. are part of a long-term plan for big tech to have absolute control over what users are allowed and not allowed to do with their own devices.

I had thought so too, and also "secure contexts" (apparently there is supposed to be a way to configure this, but it does not seem to be the case) and HSTS. I think that HSTS is very bad. (HTTPS is not bad, but all of these things that force it are a bad idea.)


IMO the downside of HSTS you’re referring to only applies to the specific HSTS-only TLDs. Otherwise, HSTS is useful to prevent a downgrade attack where someone types in http://bank.com on a new computer and poisoned DNS (via a rogue network operator or hacked router) means the initial page load for the bank loads over HTTP and thus the IP returned can show a fake login page, even to the point of the browser auto-filling login credentials.


> initial page load for the bank loads over HTTP and thus the IP returned can show a fake login page, even to the point of the browser auto-filling login credentials.

That problem would correctly be solved in a different way. "http://example.com/" is different from "https://example.com/" and so would have different cookies, auto-fill, etc. An indicator can be used in the location bar or status bar if needing to indicate the protocol and security clearly. The browser also should not auto-fill anything without the user's permission, regardless of protocol and TLS.


>different cookies

That protection essentially already exists with the secure bit in cookies and the __Secure- prefix.

The problem is that http:// links exist all over the web, and a website owner cannot force all incoming links to say https:// instead. Without HSTS, any time a user clicks one of those links, the user's ISP will be able to see the exact URL the user is visiting, and might even inject ads or other stuff[1] into the page. HSTS solves that.

Disclosure: Google employee, and my license plate is HSTS

[1] https://en.wikipedia.org/wiki/Great_Cannon
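
If you want to see a site's HSTS policy for yourself, it's just a response header; a quick Python check (the hostname is only an example):

    import urllib.request

    # Print the Strict-Transport-Security policy, if the site sends one.
    resp = urllib.request.urlopen("https://en.wikipedia.org")
    print(resp.headers.get("Strict-Transport-Security"))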


While it is true, I should think that it should be up to the end user to decide what they want, and that it should not be assumed that everyone else knows better. If the user writes "http" then it is http; if the user writes "https" then it is https, etc. (What it should do if the user does not specify the scheme is a different question, and different users may have different preferences (which should probably be configurable). My preference is that it would treat the URL as relative instead of absolute in that case, but that is probably a minority view.)


Most users have no idea what any of this means. Even I often don't know which one is best. I want to use https:// if it's available, and http:// otherwise. Should I, every time I want to click a link, instead right-click to copy the link location, then paste it into the URL bar and modify it to https:// to try that, and then if it fails to load, change it back to http:// ? That's a massive amount of work. I want it to just work without all that hassle and without having to keep in my mind which websites support https:// and which don't.

>If the user writes "http" then it is http, if the user writes "https" then it is https

Should we also block the website from doing a redirect to https:// ? HSTS is basically just a redirect cache.
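
To make the "redirect cache" framing concrete, a toy model in Python (ignores includeSubDomains, preload, and eviction):

    import time
    from urllib.parse import urlsplit, urlunsplit

    hsts = {}  # host -> expiry, learned from Strict-Transport-Security headers

    def remember(host, max_age):
        hsts[host] = time.time() + max_age

    def upgrade(url):
        # A remembered host is rewritten to https:// locally, so the
        # plaintext http:// request is never sent at all.
        parts = urlsplit(url)
        if parts.scheme == "http" and hsts.get(parts.hostname, 0) > time.time():
            parts = parts._replace(scheme="https")
        return urlunsplit(parts)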


> Most users have no idea what any of this means.

OK, but users who do know what it means should be allowed to configure it differently. The computer software should be designed for advanced users who do know what it means (and include documentation in case you do not).

> I want to use https:// if it's available, and http:// otherwise. Should I, every time I want to click a link, instead right-click to copy the link location, then paste it into the URL bar and modify it to https:// to try that, and then if it fails to load, change it back to http:// ?

No; it should be configurable to do it automatically however you want.

> I want it to just work without all that hassle and without having to keep in my mind which websites support https:// and which don't.

There are also other ways to do that even without HSTS, though. Still, it should be configurable by the end user.

> Should we also block the website from doing a redirect to https:// ?

No, that isn't up to the client side to block. (I think that most web sites should not automatically redirect in this way, but that is not related to the client software.)

> HSTS is basically just a redirect cache.

Then it is deficient since it is not the only kind of redirect. Furthermore, it is bad because it does so without the user specifying if they want this cache or if the user wishes to override it for any reason, making some things difficult to do. It should not try to think it knows better than the end user what the end user is wanting to do.


>OK, but users who do know what it means should be allowed to configure it differently.

Yeah, I agree. In order to avoid bloat, it might be best to offload this to extensions.

>No; it should be configurable to do it automatically however you want.

I agree, but I think complex configuration should be offloaded to extensions. By default it should do what the website owner wants (HSTS).

>Then it is deficient since it is not the only kind of redirect.

Other kinds of redirects aren't relevant for security, so they don't need the degree of cache guarantees that HSTS provides.

>Furthermore, it is bad because it does so without the user's specifying if they want this cache

Stuff should be secure by default. We don't want security to be opt in.

>or if the user wishes to override it for any reason, making some things difficult to do.

I agree there should be overrides.

>It should not try to think they know better than the end user what the end user is wanting to do.

Sometimes the end user actually doesn't know what the end user is trying to do and gets phished. I agree there should be overrides though, but they need to be carefully designed so that attackers can't abuse them.


> In order to avoid bloat, it might be best to offload this to extensions.

While it is a good idea to avoid bloat, there are some problems with this:

- The web browser is too bloated already, and its features are not offloaded to extensions. (If they were (offloaded to extensions which are then included with the browser by default), then it would be easier to customize those features, and the extension mechanism would be sufficient to add new HTML commands, file formats, protocols, character encodings, etc too.)

- WebExtensions is incapable of many things. (I don't know if it can affect the location bar behaviour, but XPCOM is capable and is what I have done on my computer. One thing that WebExtensions definitely does not do is to load native .so files natively. Of course native code should not be available in the public extension catalog, but would be useful for advanced users who can add it by themself.)

> I agree, but I think complex configuration should be offloaded to extensions. By default it should do what the website owner wants (HSTS).

I think it should be unnecessary. If you have a header overriding option, then it makes many other settings unnecessary, and the user can then make settings for cookies, languages, HSTS, referer, JavaScripts and other features (using CSP), user agent override, and many other things, without needing separate settings for those things.

> I agree there should be overrides.

Yes, but unfortunately the HSTS specification, and web browser authors, do not want that.

> Sometimes the end user actually doesn't know what the end user is trying to do and gets phished. I agree there should be overrides though, but they need to be carefully designed so that attackers can't abuse them.

I think differently. It might need to be a separate program, the "advanced user's web browser", in which you can override and set everything. I don't like this modern software that is not designed for advanced users. Software should be designed for advanced users.

One idea is that a better implementation can be a web browser engine which consists of a lot of independent components (HTTP, HTML, CSS, PNG, JavaScript, WebAssembly, key/mouse events, ARIA, etc; available as separate .so files perhaps (with their own source code repositories)) that can then be tied together by a C code (the main program); a programmer can modify or rewrite some or all parts of this code, to make a customized web browser with your own functions changed.


>and its features are not offloaded to extensions.

I think commonly used features should be built in, and rarely used features or features where people can't agree on how the UI should look, should be in extensions. I think HSTS customization would be a rarely used feature, so should be in an extension.

I think I agree that most of what you're saying would be ideal. I'm not sure how much of it is doable though when you consider programmer time constraints.


> I think commonly used features should be built in, and rarely used features or features where people can't agree on how the UI should look, should be in extensions.

Unfortunately doing it makes it difficult to work. Also, the extension mechanism is deficient.

If multiple kinds of web browsers are made, so that it is not a monopoly, then they do not all have to be made the same way.

They shouldn't all be Chromium or whatever; they can be something else. If you have separate components, then you can more easily replace them or tie them together differently, without having to use inefficient extensions, modify the source code (which is large and might take a long time and be complicated), waste disk space and memory on unused features, etc.


Yeah. Although, as you mention modifying the source might take a long time, similarly implementing multiple browsers would take a long time. I don't know what the solution is.


> Yeah. Although, as you mention modifying the source might take a long time, similarly implementing multiple browsers would take a long time. I don't know what the solution is.

That is why I think that separating out the components might make it easier for other people to independently build multiple browsers.


> Most users have no idea what any of this means.

That's a bad reason to do things. Users will learn what is made important to them. Red website for http, green website for https. Nice big lettering to translate to the non technical user 'Protected' mode versus 'Unsafe' mode.

Yeah, for power users show all the nitty-gritties, but this isn't about being technical or not - it's not being communicated properly and instead it's just hidden.

When your spouse doesn't communicate well with you, starts having an affair, and then hides it, _thats NOT a good thing_.

All this talk of pasting URLs... that's not how users use the web. Nobody is keeping in mind which websites support https or not. They do care what happens when they browse.


I want to add a user CA but only for certain sites. If I'm developing on blah.com, I'm happy to add a root certificate, but I want to choose which sites (blah.com, blah.de, etc.) I trust that cert for; I don't want it covering bank.com, because I don't trust my own CA management to be infallible.

Name constraints are poorly supported, and even then that again relies on me creating the root cert correctly. Browsers seem to give me, the user, the choice of either accepting the cert fully or not accepting it, not giving me the control. Why is that?
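
For reference, attaching the constraint is easy enough; it's client support for honoring it that's hit-or-miss. A sketch with Python's cryptography package (domains are the examples from above):

    from cryptography import x509

    # Limit the CA to specific domains; a conforming validator should
    # reject anything it issues for, say, bank.com.
    constraint = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("blah.com"), x509.DNSName("blah.de")],
        excluded_subtrees=None,
    )
    # ...added when building the root, e.g.:
    # builder = builder.add_extension(constraint, critical=True)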


Yes! Thank you! So many use cases!


> For example, Chrome Desktop, Firefox, and IE did not enforce HPKP if they encountered a cert from a user-added CA. Why does Android do the opposite?

Your examples are all browsers. I understood that Chrome on Android will continue to support using a user-added CA added to the user store. Android and desktops behave exactly the same for web browsers.

Non-browser apps are where the differences exist. On Android you must opt-in each app to trust the user store. I'd imagine that the next step is automating https://github.com/shroudedcode/apk-mitm to bulk replace all installed apps with modified apks.
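
For reference, the opt-in is Android's documented network security config: an app only trusts the user store if it ships something like the XML below and references it from its manifest (android:networkSecurityConfig="@xml/network_security_config"). Almost no apps include the "user" line:

    <!-- res/xml/network_security_config.xml -->
    <network-security-config>
        <base-config>
            <trust-anchors>
                <certificates src="system" />
                <certificates src="user" />
            </trust-anchors>
        </base-config>
    </network-security-config>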


Is this tool still working? I tried multiple small APKs and they all errored out. Looking at the issues, there are quite a lot of similar reports.


At least one country that I've heard of (Kazakhstan) requires everyone to install a cert issued by the government that enables them to do this kind of MITM spying on users. That could be the argument against allowing user-installed certs.


All three times (2015, 2019, 2020) the Kazakh government tried this the cert was blocked in major browsers.


So all this is going to do is break the Internet for a whole country. Or at least those that use Chrome and Android.


If Android + Chrome is the majority of computer users in the country (which it probably is), this might force the hand of the government


I doubt that a technological feature can be used to force the hand of a gov't.


That quote about the Australian politician who didn't understand that Australian law can't trump the natural laws of mathematics comes to mind.


He was Prime Minister, which says a lot.


You're right on the money, and they don't want people reversing APIs, either.


who doesn't want you reversing their apis, Google? Oh ..


>For example, Chrome Desktop, Firefox, and IE did not enforce HPKP if they encountered a cert from a user-added CA.

They have no teeth on these platforms (as of now). Contrary to this, Android has been engineered from the ground up to make its inner workings as opaque and inflexible as possible.


>For example, Chrome Desktop, Firefox, and IE did not enforce HPKP if they encountered a cert from a user-added CA. Why does Android do the opposite? I don't see the threat model they are addressing.

From what I understand in the post, desktop Chrome and Android Chrome are the same. If you add the user-added CA to the user store, it doesn't need CT. If you add the user-added CA to the system store, it does need CT. On desktop, everyone adds them to the user store. On Android, most reverse engineers added them to the system store, which causes the problem.

> I don't see the threat model they are addressing.

The idea is that if it's in the system store, Chrome thinks it must be a real CA and thus needs to follow real CA rules, whereas if it's in the user store, it's a weird CA that the user likes and wants to be treated specially, so the CA doesn't need to follow the rules.

To focus more in on threats: a system CA is a big juicy target for hackers or malicious governments, because it can issue a cert that any device will trust, so it needs lots of oversight, like CT. As opposed to a user CA, which is something tiny, that likely only this specific user trusts, so it's not a big target for hackers, and thus doesn't need to follow CT. This whole thing breaks down if the user adds a cert into the system store, because then Chrome thinks it's a big target and needs a lot of oversight, when actually it's a tiny target and doesn't.

Disclosure: Google employee


Note that this only counts for certificates in the system store. As far as I know, certificates stored in the user store (the one you use when you import a certificate through the UI) will override this requirement and work just fine.

The underlying problem is that apps stopped trusting user certificates by default in Android 7 so security researchers have had to root their devices and store certs in the system store.

Theoretically this should work if you can manage to get the certificate in both the system and the user store, though I don't think you can do that.

I'm thinking something like this: you add the root certificate to your system store so most applications will trust it; then you create an intermediate certificate authority for your MitM-ing (which you should probably do anyway if you're doing this long term) and import that certificate into the user store.

Hopefully, that way Chrome will see the user store intermediate certificate and validate it using the non-CT algorithm. I haven't tried it, though!

Note that for MitM-ing Firefox, you need to access the secret dev settings (go to about, hit the Firefox logo seven times to enable them) and enable loading user store certificates.


> The underlying problem is that apps stopped trusting user certificates by default in Android 7

The other casualties were people who use their own CAs. It took me some time to grok that the problem with some apps (which didn't want to connect) wasn't my server configuration, but the certs it served.

TLS-ALPN somewhat alleviates this problem, but it still requires having some real presence on the net.


This is the real answer. While I of course support hardening security, I have to think that Google has other motivations in introducing these restrictions on the system cert store. There are legitimate use cases where you want to MITM yourself (or your employer). This, combined with cert pinning and the use of encrypted DNS (which is definitely not a bad thing in and of itself), means that Google is going to keep having access to useful tracking data.


Agreed. This is just another prevention measure to line their pockets


Wasn't Chrome going to add its own certificate store? Whatever happened to that?


If you've ever tried to use 802.1x / EAP on Android then you've already had a taste of this issue. Android makes importing and trusting a new root certificate authority very difficult. At least in my experience the device constantly pops up warnings about the root ca, despite using the appropriate import options. And if you're paranoid enough to use wifi client authentication, then you probably don't want anyone else to issue certs for your devices.

On one hand it's commendable that Google makes it hard on malicious actors, but on the other there are legitimate use cases for importing your own root CAs and using something stronger than WEP is just one of them.


> If you've ever tried to use 802.1x / EAP on Android then you've already had a taste of this issue. Android makes importing and trusting a new root certificate authority very difficult.

Good lord don't get me started. In worked at a NOC in college part-time and a significant part of my job was helping users onboard devices to the network and determining where there were gaps in our onboarding process. After a random security update, all Pixel phones just ... stopped being able to connect to our network and we eventually determined that you needed to go through a new, non-obvious path to import a CA for our EAP-TLS certificates. I'm not remembering the details but unless you connected _exactly_ right on the first try, it would delete the CA certificate and you'd have to start over. There's a lot of other details here but it ended up taking us almost a month to find the exact path to get things up and running.

We also paid a vendor (SecureW2) for an app to enroll devices on the network, but Google removed the ability for users to edit configurations generated by apps. Our network required disabling MAC randomization at the time (which Google provides no API for apps to disable). Before the change, users would enroll, and then disable MAC randomization to complete setup. However, because users were no longer allowed to edit these configurations after the app had gotten everything else configured, they were left dead in the water.

On the flip-side: Apple makes this very easy and provides a "profile" mechanism that users can download and get everything set up in a few clicks.


> Our network required disabling MAC randomization at the time (which Google provides no API for apps to disable). Before the change, users would enroll, and then disable MAC randomization to complete setup.

Yeah, that one I kinda agree with. I don't want most apps to be able to disable that. It's one of the first lines of defense in protecting a device's privacy, and I doubt most people would understand the potential impact if any app could even ask to disable it. That one can be left in the main settings.


Yeah, I totally understand why Google did that (though, it's a WiFi provisioning app... it's literally an app that connects you to a previously untrusted WiFi network. I get you gotta draw the line somewhere but that's debatably just as much of a privacy risk). That said, if you decide that it's not allowed to be changed by an app, you gotta let the user be able to make the change...


This is seriously such a joke because for years Android had a “don’t check certificate” option for 802.1X. It was unique among the most popular client OSes in allowing this extremely unsafe configuration, which is way worse than “trust on first use” (the least secure option in iOS). And now suddenly a total 180, they are extremely anal about certificates to the point that 802.1X is very hard to use. Thanks a lot gang.


"Trust on first use" is not always less secure than using a CA.

With trust on first use, if you validate that the certificate matches the one you expect, then you're good as long as the server and your device are not compromised.

If you go the standard route and use a certificate authority, then a compromise (due to law enforcement or not) of the certificate authority will cause your device to silently trust a third party MITM certificate.

A lot of hidden implicit tradeoffs like this become apparent once you realize that your personal threat model is only loosely aligned with Google's.


It's worse. A compromise of any certificate authority will do this.


We control the CA so this method of attack is not possible.

That said, TOFU is only less secure in practice, not in theory. The "in practice" is because users do not actually compare the cert with anything. They will always just click "Trust."


My university used to recommend explicitly selecting "do not validate" for the longest time, even though validation through the system store (or a certificate of your choice) has existed for years.

For what it's worth, if you're using a valid TLS certificate for your 802.1x setup then you don't need to load any certificates anymore (at least not since Android 10 or 11). Users may need to enter a domain to validate, though; I don't know the specifics of these protocols.


Not every modern Android device offers the “use system CA store” option. And Chromebooks don't. We just gave up and kept loading the certificate.

You also have to understand that “valid” doesn't really mean anything. Where Android supports the system CA store, it's the first and only OS to do it this way. It doesn't make a ton of sense to try to do this because there is no domain to validate. Unless it's preconfigured, in which case you may as well have loaded a root CA.

That’s what we’re doing now (preconfiguring the root CA). Then the server sends a valid cert and trust chain, just not within the typical global PKI infrastructure.


From what I'm reading about this online the domain validation thing is part of the WPA3 spec, though it's clearly more visible on Android. Perhaps they removed their WPA2 code path and stuck to WPA3 exclusively but I think this is a way forward rather than a problem; it's too easy to accidentally import a root certificate authority into the trust store when you're trying to get the WiFi going and that's a security risk. The stupid warnings should still disappear when you import the CA as an EAP certificate of course. I'm pretty sure most modern operating systems will (some day soon) connect to enterprise networks configured the way Android likes it without ever needing to install a certificate, which is an obvious benefit to me. The domain itself is either the entered domain or the domain of the identity you entered, I believe this is also based on some part of WPA3.

Validating the common name through the system CA store is also an option on Linux, though you have to select the system certificate store manually instead of specifying a PEM file that you can never move again.

I don't know about Chromebooks but I think the system CA validation setting is standard in Android since either Android 10 or Android 11. Android 11 added validation of the certificate (presumably through OCSP stapling?) but that's disabled by default. If you're not on Android 10+ I'm not sure if I'd call that a "modern" Android version anymore with how quickly manufacturers drop support for older Android versions. I'm pretty sure Google already dropped security support for Android 9 anyway.

It's possible that some manufacturers broke the setting, but if they did they should've added their own replacement. You can't blame Google for broken Android forks imo.


> It's possible that some manufacturers broke the setting

Google is one of these manufacturers. I just checked an up-to-date Pixel 6 and it did not have this option.


It's unfortunate that the SSID can't be automatically used as the domain on the TLS certificate.


Unfortunately this wouldn’t help us because we are an eduroam identity provider. The SSID is always eduroam but the cert presented is different, based on your username, because your login is handled by your home institution (the IdP responsible for your identity).


But I believe Android does care about the domain an eduroam user says their account is in. So if your user says "I'm example@mit.edu", I think it will expect the 802.1X server (at MIT) to present a certificate for mit.edu, which is what will happen in eduroam.

The certificates used are PKIX certificates; they say they're for TLS Server Authentication (which they technically are) and the subjects are DNS server names (these are, after all, servers on the Internet), and so realistically the only PKI exercising any oversight over such certificates, so that it could Just Work™ (which is what your users want), is the Web PKI.

So this actually makes sense?


Yeah, eduroam is a bit more special in that aspect. I can imagine it working in quite a few other cases though.


802.1x certificate checking is a big mess because it is totally unclear what the certificate should certify.

The WiFi alliance should have specified a domain name instead of an SSID, then you could just have checked the certificate for that.


Android's networking team has historically been very opinionated and unwilling to support use cases they don't consider to be the right way. It and ChromeOS are still the only things actively refusing DHCPv6 support for example.


Unlike with a lot of what they do, that resistance I appreciate. I don't think it's the right way either. I've seen firsthand how it's a crutch that lulls some network admins into thinking they can just think about v6 as if it were v4.


All this achieves is that, for people who want to subnet a /64, it is not possible on Android without manually setting the address (and only on Android). Yeah, and Android doesn't support manually setting the IPv6 address in the UI, lol.

What an OS.


On the other hand, it also prevents ISPs from subnetting /64. There are probably some who would give you a /120 if it worked, just to be dicks.


It does not. Competition prevents that. This just makes control over one's network harder in IPv6 case.


Haha, competition between ISPs. That's a good one.


Well, some of Android. A Huawei Honor phone happily accepts DHCPv6-issued addresses.


I've maintained and deployed dozens of IPv6 networks but I'm still not quite sure I follow what you mean. The Android team's reasoning was that they didn't want networks to be able to disallow multiple addresses (particularly carriers, not Wi-Fi), but that doesn't really have anything to do with treating it like v4 being wrong.


Exactly; DHCPv6 would allow to force the device to use just a single IP address. In IPv6, you need extra IP addresses for tethering; 464XLAT needs an entire /96 prefix. Allowing DHCPv6 would allow network admins to cripple functionality like this. It's not like IP addresses even in the smallest subnet (/64) are a scarce resource.

For carriers, it is not necessary at all. LTE networks use different mechanism for communicating assigned IP addresses.


DHCPv6 forces no such thing. 464XLAT can still be done via PD, though it's really a carrier thing again; enterprises will know if they need 464XLAT, and the device doesn't need to force it to be possible. Same story with tethering.

LTE w/ PDN still uses DHCPv6 PD or SLAAC (or both) https://i.imgur.com/2dKAw5W.png


> DHCPv6 forces no such thing it just allows for it.

I wrote "would allow to force", not that it forces.

> LTE w/ PDN still uses DHCPv6 PD or SLAAC (or both).

Unless you are turning the Android device into a router, you won't need PD support there. What else do you need prefix delegation for?


> I wrote "would allow to force", not that it forces.

Ah sorry, must have misread, my bad. All the same, I'm not seeing how this plays into why it should be forced; the phone is just half the story (the CLAT), and if the enterprise wanted to implement 464XLAT they can PD and run a NAT64 gateway. If they don't, then it's not going to work just because the device has enough addresses to be a CLAT, as there is no NAT64 gateway. Or if they just want their devices to dual-stack or single-stack v6 instead, why does Android get to decide they need to support something they aren't using?

> Unless you are turning the Android device into a router, you won't need PD support there. What else do you need prefix delegation for?

The prefix for 464XLAT or to allow tethering. DHCPv6-PD is how you do this when you're not using SLAAC.


> All the same I'm not seeing how this plays into why it should be forced, the phone is just half the story (the CLAT) and if the enterprise wanted to implement NAT64 and 464XLAT they can PD. If they don't then it's not going to work just because the device has enough addresses to be a CLAT.

You are right. I used that as an example of why a device would want more than one IP address. One of the aspects of treating IPv6 as IPv4 is the assumption that one IP is enough. I get it, it simplifies logging/reverse resolution for example, but one IP might not be enough, and the example, while maybe not the best, illustrates the point.

> The prefix for 464XLAT or to allow tethering. DHCPv6-PD is how you do this when you're not using SLAAC.

There is a difference in scope between PD and SLAAC:

With PD you usually hand out /64 (or bigger) carved out of whatever larger subnet you have. So if your upstream gets you /48 or /56, this is the mechanism to hand out smaller subnets to routers downstream.

SLAAC support allows any device in the subnet to claim any address inside the announced prefix it wants, unless it is already taken and the other device objects. There is no limit on how many addresses it can claim, but claiming a /64 or more would take a while :). Claiming a few is enough for tethering to work.

SLAAC is what you get with RA by default; only those that want to force DHCPv6 on their network disable it (yes, I tried that once when playing with it. That playing was a great way to understand why it was a bad idea).

Even if DHCPv6 is supported in the network, that doesn't mean you also get PD. It remains completely optional. This is a problem with many CPEs (i.e. with a class of devices it was designed for!) that are supposed to support PD, but the support is either buggy or non-existent, thus their users never have more than a /64.


If enterprises have use cases, then unless there is a good, solid example showing it's universally the case that one should always have many addresses per device, it just doesn't seem like there is anything beyond "I think this is the better way, therefore I won't let you do the other" backing the choice. If an enterprise wants a 1:1 mapping of their devices using DHCPv6 for centralized logging, tracking, monitoring, avoiding device-specific RA bugs/implementations/limitations, to consolidate the function while dual-stacked, certain DHCP options being available, or any other reason they have, then someone else's non-applicable use case for many:1 isn't reasoning for nullification of those cases. In general I'm against any approach from a product/service that takes the stance "we just know better" and doesn't allow the operator to say otherwise, even when I agree it is actually the superior way in most cases.

Yeah, PD will give you a much bigger block than needed, but there isn't really much of a block shortage. More importantly, the alternative is NAT44 on the CLAT, so only a single v6 address is required; this is actually what the standard says implementations "SHOULD" do in the single-IP scenario, as after all 464XLAT doesn't support peer-to-peer or inbound anyways. And of course people and orgs are free to simply want DHCPv6 on their devices even if it isn't perfect for every scenario.

A "Android doesn't want to support NAT44 for 464XLAT, assign a prefix the the device for the 464xLAT use case" stance I could totally understand. Or even "Carriers are not asking Android to support DHCPv6 deployment methods on LTE, DHCPv6 will only be supported on wireless" is very understandable. The current stance isn't like these though, it's simply "we don't like that type of deployment so we don't allow it". Or maybe I've just missed something big and there is a reason I should have forced a couple customers away from their use cases.


The constant popups were removed almost eight years ago; right now you get a little (i) in the notification shade that says "this network may be monitored". This is the same warning that shows when you enable a VPN.

The warning is wrong when it comes to EAP certificates, of course, but far better than the constant unnecessary popups warning you that you did in fact load a CA cert.


Yeah anything that requires acking or creates distrust is a no go. I really want to transparently move all devices that support it to certificate-based authentication without training my wife to ignore or ack warnings. My fantasy is that IOT devices add support for it too, but that's just crazy talk.


> My fantasy is that IOT devices add support for it too, but that's just crazy talk.

Maybe not. Recently I played with some relays that support MQTT-over-TLS (Shelly 2nd-gen ones). What I liked was that they came with an empty CA store; it was up to the user to provide all the certs.


I've always wondered why OSes don't have a specific CA store just for 802.1x.


Android kind of does. You can mark a certificate for "VPN and Apps" or for "Wifi". I can't remember if you can do it with CAs, but you can definitely do it with the dependent certs.


If you need something stronger than WEP you have WPA2 and WPA3.


Yeah, you're right. I should have included those, but I'm really talking about per-client certificate authentication vs a shared secret.


I always find it highly ironic that I can trivially MitM my non-jailbroken iPhone to inspect app traffic (unless the app uses cert pinning), but MitM'ing on a non-jailbroken Android phone is a huge pain in the ass, basically impossible without patching binaries (please correct me if I'm wrong).


Loading certificates into the phone is as easy as opening the files. The problem here is that Android apps have to opt in to loading user-imported certificates.

Chrome and many other browsers will load these certificates just fine if you install them the official way. Apps that specify they trust the user store will also load them without any issues.

The method that's now broken fails because the author is using a workaround: with root permissions, the system store can be altered, which apps do trust by default. Chrome, however, is following best practices and enforces that certificates are logged in the certificate transparency log. This isn't done for user-imported certificates for obvious reasons, but it's applied to system certificates to prevent rogue CAs from faking certificates without exposing themselves to the world.

This means the workaround no longer works, or at least not as easily. There are still workarounds to fix the workaround, like the flags the author suggests here. It was never a supported way of doing things and unsupported workarounds are bound to break at some point.

I don't know how iOS deals with certificates, I suspect it's something sensible when the normal API is used (opt-out of user certificates, that is). However, apps like social media and messengers will often include certificate pinning that is impossible to get around without jailbreaks + modifying runtime code through tools like Frida. They include the hash of their (intermediary) certificates in the application itself and validate that the chain is signed by a valid certificate with that specific hash. That way, a malicious certificate authority can give out a "valid" certificate that's useless for MitM-ing your app's users!
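
Conceptually the pin check boils down to something like this (a Python sketch; real apps do it inside the TLS stack during the handshake, and the pinned value here is a placeholder):

    import base64, hashlib, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    PINNED_SPKI_HASHES = {"hypothetical-base64-pin="}

    # Hash the server's SubjectPublicKeyInfo and compare against the pins
    # baked into the app; a CA-"valid" MitM cert still fails this check.
    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    spki = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo
    )
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print("pin ok" if pin in PINNED_SPKI_HASHES else "pin mismatch")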


Frankly, it doesn't matter why exactly they've broken it. What matters is that it is broken, with no easy way to intercept traffic for all apps.

This is an extremely user hostile position from Android and Google which is clearly meant to remove oversight over what apps send from the hands of the computer owner. I have no doubt they'll continue this cat-and-mouse game of trying to make it impossible to see the traffic generated by your own device.

Now this is something the EU should work on changing instead of trying to dismantle E2E encryption.


This thread is simply about a user-visible warning screen in Chrome. It has nothing to do with apps, etc. And the warning looks like it's skippable, since it has an "Advanced…" option. Not sure how this is supposed to impact dev workflows or be user-hostile in any way.


It does have to do with the overall topic of traffic inspectability.

As explained above, a relatively recent change in Android makes applications not trust user store certificates by default, unless an application explicitly opts into that. ~None of them do, except Chrome.

The solution to that problem was to install the certificate into the system store. But now Chrome considers all system store certificates to be public ones and requires CT for them.

So now there's no way to install a certificate to be able to inspect traffic from both Chrome and other applications at the same time. (If a certificate is in both the system store and in the user store, the system store version takes precedence, so Chrome would still require CT.)

There's a Chromium bug the author of the article filed to document this regression and you can already see a Chromium dev argue that "reverse engineering" (i.e. the ability to inspect the traffic your own device produces) is "understandably" not an addressed scenario: https://bugs.chromium.org/p/chromium/issues/detail?id=132430...

To be clear, this particular change isn't the end of the world, but none of them are since they're just using the slow frog-boiling method. Each change makes it a little bit harder until eventually it won't be possible at all.


> The problem here is that Android apps have to opt in to loading user-imported certificates.

Yes, but since Android N introduced this change, I haven’t met a single non-browser app that opted in to trusting the user store, or offered an option to do that. Maybe some enterprise apps do that? So it’s practically broken for any non-browser app; as for browsers I’ll just use a desktop one...


Ah, thanks for the detailed explanation. Have been wondering this for a while.

So if I read it correctly, the major difference between the two platforms is the opt-in/opt-out part.

On iOS I can sniff some random small apps trivially, since most of them don't enable pinning; on the other hand, on Android the distrust of user certs is the default, so I have to manually patch the APKs every time.

IIRC "have to opt in to loading user-imported certificates" wasn't the case a few generations of Android ago, correct?


Correct, this changed in Android 7.


On the other hand if the app does use cert pinning, it's much easier on Android because we have https://github.com/shroudedcode/apk-mitm


If you're already patching the APK, couldn't you also patch it to trust the user store, even if the app doesn't use cert pinning?


That’s true.


what tools/methods exist for MitM-ing iphone traffic in this case?


Any tool? Install a profile, enable your cert, set a proxy, off you go. Charles, mitmproxy, Fiddler, whatever you prefer.


Don't tons of apps now use cert pinning though?


"Browsers receiving this traffic enforce that all certificates they receive come with a matching SCT, signed by a log provider they trust."

Interesting the word is "they" and not "you". Assuming "they" means the "tech" companies that provide these browsers and "you" means the computer owner.

Computer owners are usually given the run-time option to remove "trusted" root certificates that are pre-installed with browsers like Chrome. That is, remove them from the current list of trusted root certificates, not remove them from the source code. In a more perfect world, more computer owners could compile their own browsers,[FN1] thereby giving them the opportunity (freedom of choice) to remove untrusted certificates from the source code, as well as to add their own. Not to mention make other useful changes suitable to their own needs.

Can the computer owner remove a "trusted" log provider?

Can the computer owner add their own log provider?

FN1. I prefer to rely on a localhost proxy to perform TLS instead of the browser. One benefit is that I can read, edit and compile the proxy source code myself, quickly and easily. Unlike the graphical browser from the online ad services "tech" company, the author(s) of the proxy are not compromised by a pecuniary interest in selling and delivering programmatic advertising services, and the ability to use an in-house browser to support that pernicious endeavour. In using a proxy, I am not having to fight against the interests of the paternalistic browser vendor in order to protect my own.
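
Incidentally, one can at least see which log providers signed a given certificate, e.g. with openssl (a sketch; recent versions print the extension as "CT Precertificate SCTs"):

    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -text | grep -A 12 'CT Precertificate SCTs'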


> I prefer to rely on a proxy to perform TLS instead of the browser.

That's one step forward and about 30 steps backwards if you're actually doing that for security. Proxies silently accept broken TLS configurations all the time and serve them to you as HTTPS-secured. You're unlikely to encounter invalid HTTPS configurations nowadays, so you likely won't ever notice, but it's definitely less secure to break the TLS connection in the proxy.


> Proxies silently accept broken TLS configuration all the time

I don't want the browser to enforce TLS configuration; the proxy can be configured to accept or reject broken TLS configurations exactly as I want.


Would be interested to see a list of those "about 30" steps. Surely, the number is neither made-up nor arbitrary.


I am not using a localhost proxy for "security". I do not use the proxy when performing any sort of commercial transaction or other important transaction using a graphical web browser issued by a "tech" company. That usage comprises a very small portion of overall computer use. I normally use TCP clients for making HTTP requests and a text-only browser for reading HTML.


> I prefer to rely on a localhost proxy to perform TLS instead of the browser.

I also would want to do this (it would be more efficient than needing to decrypt and encrypt twice), but unfortunately the options for the proxy configuration do not seem to allow that.


I do not use the browser's proxy configuration. I use localhost DNS to direct HTTP traffic to the proxy. Alternatively one could use firewall rules.
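
Concretely, that can be as simple as a hosts entry per site, or a firewall redirect (sketches; the port is whatever your proxy listens on, and the proxy's own outbound traffic needs an exemption, e.g. an owner match):

    # /etc/hosts: send this hostname to the local proxy
    127.0.0.1  example.com

    # or: redirect all outbound TLS to the proxy
    iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-ports 8443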


That won't work for HTTPS unless you decrypt and re-encrypt it, which is less efficient, or you stop using HTTPS URLs, which will break many other things (and also has some inefficiencies when used with stuff that does use HTTPS), etc.


Efficiency is not the goal in my case. The localhost forward proxy does decrypt then re-encrypt according to how I have written the configuration, however that is exactly what I want because I need to be able to examine and manipulate HTTP requests and response bodies, among other things. Among the other things are, for example, specifying an acceptable TLS configuration, e.g., do not send SNI by default to all sites, use TLS 1.3 only for sites that support it, and use ESNI for Cloudflare sites. These are options that a "modern" browser does not present to the user. If this setup were noticeably slow I would not use it. IMO, it is faster than mitmproxy and requires fewer resources.

The proxy also converts http to https so all requests get encrypted regardless of which scheme the client specified. "HTTPS everywhere", but not only for accessing www sites with a "modern" web browser: for any program with network access making DNS lookups and trying to make HTTP requests.

I generally do not use a "modern" browser. More often I use TCP clients that have no support for TLS. It is unlikely this setup would suit other computer users but it works for me.


If you don't mind, which proxy software(s) do you use for this?

In the past privoxy has been good but it seems quite neglected recently, especially wrt TLS-related functionality.


HAProxy
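
For anyone curious, a minimal sketch of such a decrypt-and-re-encrypt setup in HAProxy, for a single host (assumes a cert for the site issued by your own local CA, and a hosts entry pointing the name at 127.0.0.1; not necessarily the parent's actual config):

    # haproxy.cfg: terminate TLS locally, re-encrypt upstream
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend local_in
        # cert for the site, signed by your own local CA
        bind 127.0.0.1:443 ssl crt /etc/haproxy/certs/example.com.pem
        default_backend origin

    backend origin
        # origin's real IP looked up out of band (the hosts file shadows the name);
        # require TLS 1.3, verify the real cert, send SNI explicitly
        server real 93.184.216.34:443 ssl ssl-min-ver TLSv1.3 verify required ca-file /etc/ssl/certs/ca-certificates.crt sni str(example.com)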


Ya know what, good. It sucks that dev tools got caught in the crossfire here, but anything that puts another nail in the coffin of broken corporate MitM "security" appliances is a huge win. Along with encrypted DNS, we might actually reach the nirvana of "either give me a clean connection to the public internet or don't, but no stupid half-broken middle."


Do you know what other thing you've just put into a nailed coffin? The ability to inspect traffic your own device is making. So now Google and other nasty corporations get to decide what they send back to their servers without a possibility of you ever finding out.

Be careful what you wish for.


But this was already possible. Hitching your "I have control over my own device" wagon to "my device allows me to do x, y, or z" capabilities that only exist due to history, backwards compatibility, or because not enough people actually use them to be a problem is a battle already lost.

Winning the war requires legislation that demands devices and software be introspectable by all users, not just ones that can set up cheeky mitm proxies.


But they're not cheeky MITM proxies, it's the way HTTP traffic inspection is done. Even if the right to inspect your traffic was legislated, it would need to be compatible with existing MITM tooling.

Why am I suddenly getting the "just go get some legislation" treatment? I could just as well give you a lesson about how trying to prevent corporate MITM middleboxes with technological means is a lost cause and you should just work on getting some legislation to prevent it.


The difference, to me, is that eliminating the ability for someone who isn't the operator of a website to present a valid cert for that website is an improvement for security and reliability.

> it would need to be compatible with existing MITM tooling

In my ideal world it wouldn't be, it would be done on the endpoint before/after the traffic is encrypted/decrypted. There would be no need to mitm anything, the OS would happily show you the content and be legally required to provide facilities for the user/software to do so.

Regulating away mitm proxies doesn't make sense because we don't need to do it, you can prevent middleboxes with nothing other than tech by breaking the ability to mitm connections.


> Regulating away mitm proxies doesn't make sense because we don't need to do it, you can prevent middleboxes with nothing other than tech by breaking the ability to mitm connections.

You can, because you're talking about middleboxes. But you can't really prevent the owner of the device from MITM-ing traffic, you can just make their life needlessly harder. Or you can attempt to make them not be the owner of the device, so that they are not fully in control, which is unacceptable.

I agree middleboxes shouldn't exist, but the only reason they are able to is because you're not the owner of the device you're communicating from. That's a problem you can solve with legislation.

> In my ideal world it wouldn't be, it would be done on the endpoint before/after the traffic is encrypted/decrypted. There would be no need to mitm anything, the OS would happily show you the content and be legally required to provide facilities for the user/software to do so.

This sounds technically infeasible. HTTP can be done by any number of userland libraries. How is the OS to ensure that all such libraries are compliant?

On top of that, you're talking about the creation of a new kind of protocol for this kind of thing here. There's an insane amount of tooling currently using HTTP proxies for this which cannot be easily replaced.


"either give me a clean connection to the public internet or don't but no stupid half broken middle."

Except the "me" in that sentence isn't you, it's whatever apps you have installed.

But what alternative would you propose?


We might even be able to use SCTP or TCP Fast Open in the real world once those boxes start becoming obsolete.


As a network security person, if you can't MitM, then monitoring and filtering will simply move to the endpoint.

Monitoring is an absolute necessity and positive thing on certain networks.


> then monitoring and filtering will simply move to the endpoint

Good. That's where it belongs.


Except for endpoints where the user/owner has zero control.

Think your smart TV or Chromecast. Suddenly, they can do anything they want and you cannot stop them.


Devices you can't control are also a problem, but the endpoints are still the right places to implement filtering. You can't guarantee access to the data anyway, as they can always encrypt the content independently of TLS. Though they're more likely to pin their own certificates so they can't be MitM'd and simply refuse to operate in a network environment hostile to end-to-end encryption.

It's best to just wall untrusted devices off from the rest of the network so they can access the Internet as required to do their job but not interact with any of your other devices. Or alternatively, replace them with open-source devices you do control.


You're describing the world everyone wants. I would much rather OSes move to a system with a filtering API, so I can get real errors like "connection not allowed by local security policy" instead of the network pretending it works and then dropping packets, or garbage responses from an appliance pretending to be my server.


Of course what we'll actually get is networks which require[0] your OS to attest that you are running in Secure Boot mode, so the network can ensure you are running an "approved" OS that prevents you from running VPNs or Tor or bittorrent or E2EE messengers...

[0] https://arstechnica.com/gaming/2021/09/riot-games-anti-cheat...


Ok, but filter on what criteria? If the connection is encrypted, how do you know what you should filter for?


The idea is that device traffic would be inspected by the OS via some subsystem that encrypts/decrypts application traffic. I'm talking out of my butt here; I am not an OS person or a dev.

I imagine instead of the web browser encrypting traffic before sending it on the wire, it would send it in the clear to a process on the OS ("Endec"? I'm trying to think of some word like codec or modem for encrypt/decrypt).

This process would be the hub for all endpoint encrypt-decrypt operations, and the place where all apps would trust to do the work. That way, inspection tools desired by the user (or in corp land, the admin) could hook in and do filtering.

Applications that don't want this, such as say, Signal or other hyper-privacy tools, could choose their own trust store and bypass it, if permitted by the OS admin. Otherwise, corps could block raw access to the NIC.
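
Purely to illustrate the shape of it (Kotlin-ish pseudocode; every name here is invented, no such API exists):

    // Hypothetical OS-level "Endec" service: apps hand over plaintext,
    // the OS does the TLS, and user/admin-approved inspectors hook in between.
    interface Endec {
        fun open(host: String, port: Int): SecureChannel
    }

    interface SecureChannel : AutoCloseable {
        fun send(plaintext: ByteArray)   // encrypted by the OS on the way out
        fun receive(): ByteArray         // decrypted by the OS on the way in
    }

    // privacy-focused apps would instead request raw socket access,
    // which a corporate admin could deny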


Hey, I hear you. That just means I'll have to get good at a different UI!


I had kind of assumed that most apps would use certificate pinning anyway, so I was kinda surprised that manipulating the system store is actually workable.

Though if modifying the system store is indeed officially "unsupported" my guess is it's only a matter of time before CT is enforced by the standard Android TLS API and will apply to apps as well.

In which case I guess the next step would be... Add a fake CT log in addition to the fake root CA?

But anyway, stuff like this confirms my impression that Android sides with app developers more than it sides with users when it comes to analysing traffic of your own devices.


Certificate pinning is a big problem for corporate environments: large companies install CA certificates on their endpoints to allow centralized traffic inspection. Apps that enforce certificate pinning cannot operate properly in these networks.

It can be a desired function sometimes (e.g., a bank that wants to protect its customers), but in most situations it blows up in their face (e.g., a bank customer who wants to manage their account from the work office).

About your conclusion, I fully agree with you. It is not about protecting users but about protecting Google. Let's not ignore the other fact that Chrome started hiding some requests from its Network panel (e.g., CORS) for "our own good", which makes network-layer inspection even more necessary.


Google keeps seeming like an advanced persistent threat to an understandable world. More and more effort keeps getting poured into ensuring software takes precedence over humanity, that we get no say.

The recent banning of sideloaded accessibility apps is another blood-curdling cry against agency, another slamming shut of the door. This totalization of security concerns is a horrifying behavior to have emerged in the past half decade, especially from a company so strongly linked to the web, one which used to have such clear positive values.


The "ban" on externally installed accessibility apps is nothing more than two extra button presses to enable APIs that are used by real-world malware to steal money. Alternative app stores like F-Droid are exempt from the change, assuming they use the correct API, and apps you manually installed can still be given the necessary permissions.

It's just harder to do when you don't know what you're doing, which is a good thing in my opinion.


If it were just two extra button presses then it would be fine. The problem is that if you try to use a sideloaded accessibility app without pressing them, the error Google gives you tells you that it's completely impossible, and doesn't mention the button presses.


That is less bad than I thought.

Manifest v3 & its forbidding of dynamic code is another major treason, in my view, an uncompromising & cruel stance to force upon the web. I'm less knowledgeable here, but I also feel like there's a bit of a hostile relationship with projects like Magisk on Android, which has long had an uneasy relationship with Google, albeit the recent v24 & its new Zygisk zygote injection shows a lot of health & excitement right now. The ever-encroaching desire to drive top-down control is highly visible in SafetyNet, which makes it clear the device in your hand serves corporations, not you.


I agree with you on Manifest V3; the restrictions would be somewhat understandable if Google didn't stand to gain so much from blocking the behaviour they now restrict.

The Magisk project is kind of a weird one; I 100% expected Google to neuter it when the dev behind it got hired, but it's clear that that's not happening.

SafetyNet is a requirement for almost every media company out there. Android would die a quick death if Netflix, Disney+, and friends suddenly stopped working because Google turned good and disabled SafetyNet. There are some enterprise advantages to SafetyNet as well; sometimes you want to be sure that certain internal applications only run on phones with their security intact.

As always, if you don't like the product, vote with your wallet, or in this case your data. Don't use Chrome/ium, don't sign into your Google account, download from F-Droid and Aurora exclusively and root your device if you wish. Firefox is still a decent mobile browser, despite Mozilla's efforts to change that.

/e/ is an excellent replacement for almost the entire Android ecosystem, and I think if that became popular among non-techies, Google might start to listen. It might also not, and Android might be doomed, but it's worth a try.


> Firefox is still a decent mobile browser

It really is not at this point in time :(


If by Manifest v3 you're referring to declarativeNetRequest, then that has supported dynamic rules since 2019[1][2].

If you mean disallowing importing remote code, that's to prevent malware from hiding in Chrome extensions until after being published.

[1] https://developer.chrome.com/docs/extensions/reference/decla...

[2] https://blog.chromium.org/2019/06/web-request-and-declarativ...


It's not just disallowing imported code, it's disallowing all dynamic code. This prohibits, for example, Greasemonkey/Violentmonkey/Tampermonkey, or any kind of extension that has dynamic behaviors.

It's a prime example of draconian security absolutism, and it's vile & detestable & anti-human. Enforcing this not just on their store, but on the web & extensions in general, is an outrage.


That seems a liiittle hyperbolic. In any case, power user tools like Tampermonkey will seemingly be supported. Whether that be through special exceptions or new APIs remains unclear. Personally I'd like to see integration with Local Overrides.

https://github.com/Tampermonkey/tampermonkey/issues/644#issu...


Preventing any kind of dynamic agency from growing on the web is one of the most severe threats to user-agency I can imagine. It's a direct strike at one of the most core distinctions that makes the web different from everything else. I really believe strongly that the web will advance once we start making more adaptive scripts/extensions, scripts that can gain & accrue capabilities, and this directly prevents advance.

V2 extensions are no longer allowed but there's still no progress or path for Tampermonkey to even experiment with.


declarativeNetRequest still doesn't have capabilities sufficient to reimplement uBlock Origin functionality. Besides that, it puts the power to decide which blocking capabilities are even possible strictly in Google's hands.

Given that, it's quite clear it's a malicious move to take control away from the user.


Yea, I see this as a good thing, assuming it's not a stepping stone to banning sideloading in general, which I don't think they are going to do.


On the other hand, if Google vanished today, the only effect it'd have on my life is that I'd watch something other than YouTube.


Lucky you

So no email you have (from an employer, school, side project, or personal) is Gmail-based? No shared document you store or access is Google Docs-based? At least everyone making calendar invites would quickly move to something else.

I’m just really surprised you’ve managed to do this and are content


In my personal life, YouTube is the only thing that seems irreplaceable. My contacts are in Google as well, but I have non-Google services for all backups, email, and search.

In fact, I'm considering a switch to Apple just to try out a completely deGoogled routine (barring work email).


I actually de-Googled myself very recently. Got my own domain, own email, and use SyncThing between devices.


The only effect it would have is that tons and tons of people would suddenly be asking you for help on how to live without their Google handholding overlords :-D


Well it'd be gone anyway, I'd just use one of my other emails.


>Google keeps seeming like an advanced persistent threat to an understandable world. More and more effort keeps getting poured into ensuring software takes precedence over humanity, that we get no say.

Truth. However, this is a blip on the radar compared to the treacherous monstrosity that is the Play Integrity API:

https://developer.android.com/google/play/integrity


An issue not mentioned here is that at the office it is routine to MitM TLS connections, what some call "TLS inspection".[FN1]

There are important reasons for performing TLS inspection aside from "developers testing their smartphone app" or "security research".

An employer should want to see the contents of what is traversing the employer's network. The employer owns the network so she gets to decide.

A home computer user should want to see the contents of what is traversing the home computer user's network. The home computer user owns the network so she gets to decide.

Anything, including apps from "tech" companies, that interferes with the ability of the network owner to see the contents of that traffic is a threat.

FN1.

https://security.stackexchange.com/questions/107542/is-it-co...

https://fak3r.com/2015/07/22/your-employer-runs-ssl-mitm-att...

https://www.quora.com/Why-are-companies-trying-to-inspect-SS...

https://it.slashdot.org/story/14/03/05/1724237/ask-slashdot-...

https://www.schneier.com/blog/archives/2019/11/the_nsa_warns...

https://attack.mitre.org/mitigations/M1020/


Are you okay if your ISP starts MITM'ing all of your TLS traffic, since they own the network you're connecting to?


Who owns the data?


Can I not simply choose the CT log I want to use and host my own CT log with my certs in it? If I can't, doesn't this effectively mean my cert has to be in Google-approved CT logs to be valid?


That's super annoying. For some time now you don't see CORS requests in dev tools, and basically the only way to debug those issues was to use mitmproxy, which is now also unnecessarily complicated.

There is also the env var SSLKEYLOGFILE that you can use in combination with Wireshark, but I haven't tested that with Chrome yet.
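
On desktop the key-logging route looks roughly like this (a sketch; whether Android Chrome honors the variable is exactly the untested part):

    # launch Chrome with TLS session keys logged to a file
    SSLKEYLOGFILE=$HOME/tls-keys.log google-chrome
    # Wireshark: Preferences > Protocols > TLS >
    #   "(Pre)-Master-Secret log filename" -> same file;
    # captured HTTPS traffic then decodes in place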

I understand why it's nice from a security point of view, but adding an option to disable this in chrome://flags would be a much better way.


> I understand why it's nice from security point of view

With no way to disable this, it seems more about the security of the apps than about the security of the user.


So this breaks Charles Proxy HTTPS sniffing as well? I haven't encountered the problem yet, even though my Android Chrome is version 101.


It doesn't break Charles Proxy unless you installed your CA cert with the method typically used by httptoolkit (installing it into the system store).

What is broken is installing a custom CA into the system store on a rooted phone and making it work with all apps (apks) and Chrome.

If you install the custom CA into the user store it'll still work with Chrome.

If you want to use Charles to inspect the HTTPS traffic of an app you are developing then you continue to follow the instructions from https://www.charlesproxy.com/documentation/using-charles/ssl... to configure your test build to use the user store CA certs.
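
Roughly, those instructions boil down to a debug-overrides block in the app's network security config, which only takes effect in debuggable builds (a sketch):

    <!-- res/xml/network_security_config.xml -->
    <network-security-config>
        <debug-overrides>
            <trust-anchors>
                <!-- trust user-added CAs, but only in debug builds -->
                <certificates src="user" />
            </trust-anchors>
        </debug-overrides>
    </network-security-config>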

If you want to use Charles to inspect apps from other developers then you need to rebuild them to trust the user store just like you would if you were developing the app yourself. Use https://github.com/shroudedcode/apk-mitm to automate that process.

httptoolkit uses the method it does because it's the easiest way to get set up to inspect everything. It's tedious to get every app set up to trust the user store.


> What is broken is installing a custom CA into the system store on a rooted phone and making it work with all apps (apks) and Chrome.

Yep, that's what I do. Still seems to work here though. I'm scared to reboot.


But how will zScaler provide extra security for your corporate apps on Android now?


Jailbroken androids for the enterprise! What could possibly go wrong?


The flag he mentions seems like a reasonable way to support the debugging use case. It's more setup, but people doing this should already be using automation to install the cert, etc.


"HTTP Toolkit gives you one-click HTTP(S) interception, inspection & mocking for any Android app."

There's kind of a vested interest here.

It would probably be sufficient to allow cert bypass in a desktop Android phone emulator, such as Android Studio. That's intended for debug and test. Nobody uses that for non-debug use by mistake.


Isn't there an enterprise policy for disabling CT for certain CAs in Chrome?


I believe there is for desktop, but I am not sure about mobile. And I am not entirely sure whether (without the browser enrolled) it would even be possible for the end user to control this.
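
For reference, on desktop the policy is CertificateTransparencyEnforcementDisabledForCas, deployed e.g. as a JSON file under /etc/opt/chrome/policies/managed/ on Linux (a sketch; the placeholder stands for the base64-encoded SPKI hash of your CA):

    {
      "CertificateTransparencyEnforcementDisabledForCas": [
        "sha256/<base64-encoded SPKI hash of your CA cert>"
      ]
    }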


How long before Google only accepts Google-signed certificates? Everything must and shall be placed within the Google ecosystem. Not good.


CT is a PKI ecosystem thing, not a Google thing. Google hosts one of the CT logs for HTTPS certificates, but so do other major companies: Cloudflare and DigiCert come to mind. Let's Encrypt even has one.


This and the side-loaded accessibility threads are making me rethink how stupid people are. Please read and try to understand before typing something on your keyboard that will make you look stupid.



