> Motivation: Users often depend on websites trusting the client environment they run in.
Aka corporations insist on control & want to make sure users are powerless when using the site. And Chrome is absolutely here to help the megacorps radically advance the War On General Purpose Computing and make sure users are safely & securely tied to environments where they are powerless.
There's notably absolutely no discussion or mention of what kind of checks an attestation authority might perform, other than "maybe Google Play might attest for the environment" as a throwaway abstract example with no details. Any browser could do whatever they want with this spec, go as far as they want to say, yes, this is a pristine development environment. If you open DevTools, Google will probably fail you.
It appalls me to imagine how much time & mind-warping it must have taken to concoct such a banal "user motivation" statement as this. This is by far the lowest, most sold-out bullshit I have ever seen from Chrome, a team I generally really do trust to be doing good & whom I look forward to hearing more from.
I reject having only short, one dimensional views.
Generally I am pro Project Fugu & pro building a bigger, better web. Google spends an enormous amount of effort working on specs with the W3C, WICG, and other browser implementers, advancing incredibly good & useful causes. They spend huge effort enhancing DevTools so everyone can work on the web.
Building a good & capable web is necessary for Google to survive. An open & capable web is the only sustainable, viable alternative the world has seen to closed proprietary systems, which history shows carry far greater risks, hazards, & pernicious behaviors.
Generally Google's efforts to make the web a good, viable & healthy platform align with my vision. That they want to do good things & make a great connected world wide web because the web's thriving helps them run their advertising business typically does not create a big conflict for me. I'm usually happy with the patronage the web receives & I dread it ever drying up, and it saddens me that people are so monofocused, so selective in focusing only on the bad; I think that perception hurts us all.
I agree to an extent that you shouldn’t focus only on the bad, but as the old saying goes, “so much is lost for the lack of a little more.”
What my experience has taught me is that you have these 80% things that are good, but there is the one person or thing that ruins it for everyone. One person, one manager or CEO who pushes something through because he wants some gain, or one selfish move born out of short-term profit or thinking.
From climate change, to wars, to ill-willed software, history sometimes gets bent by bad decisions, sometimes stemming from a comparatively small but powerful group who wield too much power. Google is for all intents and purposes a monopoly, which makes all their decisions at least suspect, since they aren’t competing on the same level as a Mozilla, or name any other search engine. This is bad for any ecosystem.
I wish I were still seeing the early Google that was optimistic, people-focused, approachable, but that time is at least some years in the past. There are probably good people still working for Google with that ethos, but they get overshadowed by those nagging decisions that are suspect.
Agreed & upvoted. I do tend to think it's incredible what a massive impact small influences can have.
Thankfully the web is still a very multiparty system, with W3C group reviews & implementer signals all being registered well ahead of time. Comments on blink-dev were strong & fast. Unlike almost every other system on the planet, I think the mediation here is real & strong!
In general, Googlers tend to be in favor of initiatives like this.
You have to remember, from their point of view they are writing the web software and when a user agent is non-compliant, it gets in their way. UAs with weird quirks translate to impossible-to-reproduce bugs, so the default bias is in favor of standardization and regularity.
You do not, the user is responsible for the operation of their device. Most of the time this should be caught by whatever malicious software detector the user runs. Also, Chrome and Firefox very heavily guard against extensions being installed from outside of the usual way, i.e. by outside programs.
> You do not, the user is responsible for the operation of their device.
As time goes on hand-waving the matter as "user's responsibility" is becoming a less and less acceptable answer. Hard assurances are being demanded and applied technologies are progressively patching the existing loopholes.
Organization executives and lawmakers are increasingly demanding that digital services be made un-hackable. Someone with an attitude, trying to shirk duty by claiming we just have to trust that all of the users will always be responsible and non-abusive all of the time, will at best be laughed at and shooed out of the room. More realistically, they'll be given a final PIP. Telling your bosses "no, I'm not going to do that" is a resume-generating event.
Both are groups of people who have no direct understanding of how any of this works.
You can demand change all you want but it doesn't change how the real world works. These people need to come off their high horse and come join the rest of us. So sick and tired of C-level people demanding shit they know nothing about.
Why do you, as a website owner, think that it is your responsibility to protect your users from mistyping the name of Python packages they are installing via pip?
At some point you don't. The cure becomes worse than the disease. Maybe if you could give users the option to enable it. But let's be honest: if this ships, every bank will require it. Good luck checking your balance on Linux or a rooted Android phone. You will either get an approved operating system or keep your cash under your bed.
There it is, the AI scraping detector. The hints in the text are obvious:
"This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure."
The smoking gun is "intellectual property". In a conventional browsing session the website has no idea what the human user is going to do with copyright-protected information published on the website. Hence, it assumes good intent and grants open access.
In the case of an AI scraper, assuming you detect it reliably, the opposite is true. Bad intent is assumed as the very point of most AI scrapers is to harvest your content with zero regard for permission, copyright or compensation.
To make this work, Google outsources the legal liability of distinguishing between a human and a bot to an "attester", which might be Cloudflare. Whatever Cloudflare's practice is to make this call will of course never be transparent, but surely must involve fingerprinting and historical record keeping of your behavior.
You won't have a choice and nobody is liable. Clever!
Not to mention the extra new avenue created for false positives where you randomly lose all your shit and access, and nobody will explain why. Or, a new authoritarian layer that can be used for political purposes to shut down a digital life entirely.
All of this coming from Google, the scraping company.
I have a much simpler solution: it should be illegal to train AI on copyrighted content without permission from the copyright holder. Training AI is not the same thing as consuming information, it's a radically new use case.
Lots of people doom and gloom here about threats to user privacy and freedom.
This is the one I'd be worried about. Thought it was annoying to not be able to use banking apps on a rooted Android? Think about how annoying it will be when you can't do much of anything, even on the Web, unless it's from a sealed, signed Apple/Google/Microsoft image-based OS...
I realize the way Firefox's user share is going, it might not matter or they might feel they don't have a choice but I really, really hope Mozilla doesn't even remotely consider implementing this.
Firefox doesn't allow users to install unsigned extensions unless they use a beta version, because users apparently can't be trusted to install software. I trust Mozilla to fight for privacy (they're great at it), but I do not trust them in the slightest to fight for user freedom (like accessing banking sites on an "insecure" OS).
The frustrating thing is that this is the final nail in the coffin for computing freedom while also having a legitimate use case. I'm seeing new banks that flat out do not have a web UI at all. The reality is that desktop OSs and browsers have done nothing to stop the fact that it is trivial for a regular person to accidentally install malware that is completely invisible to them.
Online fraud and theft is exploding right now and the average person is simply not capable of securing a laptop so the companies have decided to only allow secure access through a phone which can usually be trusted to be malware free.
And in 100 years you will need to have your brain scanned to withdraw cash. The process will validate both your identity and that you aren't being coerced.
It has to stop somewhere. 100% security may reduce the banks' fraud costs but it isn't acceptable for personal freedom. "Choose a different bank then" only works until they all adopt it.
The banks aren't the ones taking the loss for scams, since their system doesn't have any faults; it's you or your computer that authorized the transaction. I can see the reasoning for the push to more secure transactions. We constantly have people being scammed out of their life savings due to sophisticated attacks beyond their understanding.
I assume an old person cares about not being left poor and helpless in retirement more than they care about free software and computing freedom.
I think it's likely that we will end up in a situation where some devices like phones and maybe laptops are considered "secure environments" where banking transactions and such can be safely executed, while alternative devices will be available for complete freedom and tinkering. You'll likely always be able to run any program you want on your laptop, but those programs will be limited to their own sandbox rather than having free access to any other program's data.
I agree, sort of -- I still think it's a farce. Unless this is implemented in a way that has a checklist that is updated so frequently as to force Windows users to do what they're often notorious for LOUDLY refusing to do... then it's more theatre.
As long as Windows users are allowed to remain as out of date on patches as they are, and depending on what the browser uses as its attestation "source", I don't see how the browser and website can ever meaningfully establish the validity of the statement "the client is trusted to be malware free".
I wish the answer was that MS would secure Windows better: sandboxing applications, and making it a pain in the ass to request high-privilege functions. The current state of things is that you just get a useless popup to grant admin access, which literally every program requests, so as a user you have no real tools to combat malware.
It's too hard for even someone who is highly knowledgeable to know if they have malware, let alone the average person.
> but I really, really hope Mozilla doesn't even remotely consider implementing this.
Apologies for the simple question, but wouldn't forks of popular browsers crop up without this attestation API implemented? Or is it a thing where websites themselves would potentially refuse traffic from browsers that didn't support it?
You're right about the market share; I'm not sure about the motivation. Apple has the capacity to be an attester in this system, since they are an OS and hardware vendor. And while they care some about privacy (in as much as it's a marketing point), they manifestly don't care about user freedom (on iOS, and increasingly on MacOS). I think as long as their code running in a secure enclave is an acceptable attester to this API (which it will be), they don't have any motivation to oppose it.
Do you realize the amount of work that Google has put in over the years to provide Linux support for Google Chrome? Why would they suddenly about face on that?
Wouldn't it be great if you never had to deal with another captcha?
> Wouldn't it be great if you never had to deal with another captcha?
I run a custom build of Firefox, on a (somewhat, still-ish) niche Linux OS, with the kernel and bootloader signed by my own signing keys. What could I attest with, that will make some banking website perceive me as a trustworthy client?
The second this becomes widely available, it won't mean "bypass captchas" - it will mean "can't bank unless you use up-to-date Android or latest iOS".
These things Google has been announcing will culminate in an inhuman level of oppression of our digital lives and might irreparably damage people's sense of ownership and sovereignty over their own personal electronic devices.
Gluttony, greed, envy, and arrogance. This is truly sickening.
These proposals appear to be coming from the W3C Anti-Fraud Community Group. They haven't identified even a single use case[1] of the technologies they're trying to push onto the world being misused and abused. Use cases and their naivety appear to be largely copied from the OWASP Automated Threats to Web Applications[2].
There is no use case about these technologies being used by a dystopian country. No use case about enabling anti-competitive practices from incumbent companies. Seemingly little to no care or attempt to balance the longer-term strategic impacts of these technologies on society, such as loss of innovation or greater fragility due to increased centralisation/monopolisation of technology. No cost-benefit or historical analysis of identified threat actors' likelihood to compromise TPMs and attested operating systems to circumvent these technologies (there's no shortage of Widevine L1 content out there on the Internet). No environmental-impact consideration for blacklisting devices and having them all thrown into a rubbish tip too early in their lifespan. No political/sovereignty consideration of whether people around the world will accept a handful of American technology companies being in control of everything, and whether that would push the rest of the world to abandon American technology.
The majority of the contributors to these projects appear to be tech employees of large technology companies, seemingly without experience outside of this bubble. Discussions within the group at times self-identify this naivety. The group appears very hasty to propose the most drastic, impractical technical security controls with significant negative impacts, such as whitelisting device hardware and software. But in the real world, e.g. for banking fraud, attacks typically occur through social engineering, where the group's proposed technical controls wouldn't help. There appears to be little to no attempt to consider more effective real-world security controls with fewer negative impacts, such as delaying transactions and notifying users through multiple channels to ensure users have had a chance to validate a transaction or "cool off".
On the explainer page [1], the first use case example is to prevent ad fraud (and, presumably, ad blocking...):
> Some examples of scenarios where users depend on client trust include:
> Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.
So it's essentially Google further entrenching its tentacles in web standards in the most invasive ways with no regards towards privacy and user control. It's a shame what the W3C has degenerated into.
It's sad that "prevent fraud" is the supposed benefit, when most fraud happens via phishing and social engineering rather than a technical exploit. And yet this is the way it would be sold to the unknowing public.
It's "think of the children!" way of arguing for intrusions and surveillance.
This probably isn't the best analogy to make the case you're trying to make. Agents in real life don't just blindly do whatever any customer asks. They actually have some standards and boundaries they have to observe, including ensuring integrity in their dealings on behalf of the customer. (To be clear I'm not endorsing the proposal, just commenting on the analogy.)
I'm aware of the history of the term. It's not an accurate statement of fact if the browser isn't acting on behalf of or towards the interests of the user.
I agree with you, but in regards to Chrome, that's been the case for at least 10 years, so I'm not sure what good pointing it out now will do. That ship has sailed, and ain't comin' back. Better would be to point people toward better options, if they exist.
Bot traffic? Anyone using Linux will get blocked because "they can't be trusted". Only people running an "approved" operating system from a billion dollar corporation will be allowed to access.
This is already what is happening with SafetyNet on Android. For now most applications don't require hardware attestation so you can pass by spoofing an old device that didn't support hardware attestation but I'm sure that will change within a decade.
Can confirm, the segregation and poor treatment of Linux users is palpable. I need to make my browser claim it's on Windows just for websites to not treat me as garbage.
Look, it isn't that bad, but enough to make me do it. It's obnoxious.
You don't have to be a billion dollar corporation to become Play Protect certified.
Being able to trust the security of a client can protect against many attacks, and it is up to websites to evaluate what to do with the information that a client is proven to be secure.
- What is the least expensive device that can be certified like that? The least expensive process?
- What is the highest level of openness such a device can offer to the user, and why?
To my mind, it would be best to have the option of a completely locked down and certified hardware token, a device like a Yubikey, that could talk to my laptop, desktop, phone, or any other computing device using a standard protocol. As long as it's unforgeable, the rest of the system can be much, much less secure without compromising the overall security.
> AKA as long as you don't give control to the user.
A system being secure doesn't mean that the user doesn't have control. The operating system should allow the user to control it, but only in a way that doesn't compromise the rest of the system's security. The Windows approach of giving the user an administrator account, or the Linux approach of a root account, has proven over time to be worse for security. Windows has been trying to roll back this mistake, but most Linux distributions don't do anything because they don't care that much about security compared to an operating system like Android.
Sure, in theory it doesn't but in practice it does.
I wanted to extract some data files from an app I was using, and Google's Android told me that I was not allowed to do that. That was the app's data, not my data.
It doesn't really matter whether it's root or fine-grained permissions. The fact is that on stock Pixel phones the user can't access whatever data they want. So in practice they don't have control.
That same ability makes it possible for 2FA apps to exist, since the secrets can't be copied, turning the factor into something you have instead of something you know. Additionally, just because someone is using a device doesn't mean the current user is the owner of the device.
Google Authenticator lets you export your entire set of secrets as a QR code. In fact, you can even store them on Google's servers. Though I have no clue why you would do this instead of just printing out the QR code and storing it in a lockbox...
Furthermore nothing prevents you from just taking pictures of the individual enrollment keys and printing those out either.
If you want TOTP 2FA that actually follows a one key per device policy you need to buy hardware tokens with some kind of out-of-band keying mechanism and enroll those. Then your problem changes from "how to stop people from copying my 2FA tokens" to "how to not get locked out of my account when my 2FA key device breaks."
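To make that concrete: a TOTP code is just an HMAC over the current 30-second time step, keyed with the shared secret, so anyone holding an exported secret string can mint valid codes on any machine. A minimal standard-library sketch (the base32 secret below is a made-up demo value, not from any real enrollment):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step, dynamically truncated."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Anyone with the exported secret can generate valid codes anywhere:
print(totp("JBSWY3DPEHPK3PXP"))  # made-up demo secret, not a real account
```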
SafetyNet means the app checks to make sure you're not rooted or running a custom ROM because those are considered a security risk. If you are not running a locked-down OEM ROM, you can't run many apps including banking apps.
Microsoft's Pluton on-CPU attestation technology means this is coming to PCs.
I am talking about "Play Protect certification." SafetyNet is deprecated and has been replaced with the Play Integrity API.
> means the app checks to make sure you're not rooted or running a custom ROM
The purpose is to be able to tell whether the user is running a version of the app that came from the Play Store, and whether the device's integrity is compromised, meaning the app cannot rely on the security guarantees the OS provides. Banking apps are not against people using custom ROMs. They just want to ensure they are running on a secure operating system.
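For a rough idea of what that check amounts to server-side, here is a sketch over a decoded Play Integrity verdict. The field names follow Google's published verdict format as best I recall, the token decryption/decoding step (done against Google's servers) is omitted, and the policy is just an example, so treat this as illustrative rather than authoritative:

```python
def device_looks_trustworthy(verdict: dict) -> bool:
    """Illustrative policy over a decoded Play Integrity verdict (assumed field names)."""
    device = verdict.get("deviceIntegrity", {}).get("deviceRecognitionVerdict", [])
    app = verdict.get("appIntegrity", {}).get("appRecognitionVerdict", "")
    # Rooted devices and most custom ROMs fail the device verdict, which is why
    # those setups get locked out even when the user is entirely legitimate.
    return "MEETS_DEVICE_INTEGRITY" in device and app == "PLAY_RECOGNIZED"

# A stock, Play-certified device running the official app passes this policy:
print(device_looks_trustworthy({
    "deviceIntegrity": {"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]},
    "appIntegrity": {"appRecognitionVerdict": "PLAY_RECOGNIZED"},
}))
```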
It could be good if it was my choice. But I actually want to be able to access my bank from my computer running open source software where I can modify configuration and apply patches.
I don't want to have to agree to Microsoft or Apple's ToS so that I can access my bank.
I do not look forward to trying to find a bank that doesn't require this of me because all of the major banks have jumped on board.
Usually banks don't let you disable antifraud protections. They prefer to make their business and the banking system more secure by reducing the rate of fraud. Fraud is expensive for them to deal with so it doesn't really make financial sense to let customers say that they are okay with having more fraud happen using their account.
> So the server is wildly insecure and wants to make it my problem.
Take for example a simple spam bot. The bot authenticates and then starts sending spam to people. Detecting spam and spammers server-side is an imperfect art; it is a constant game of doing things to reduce the rate of spam. It can help a lot if you can ensure that only your client is able to work with your service. This means that attackers can't just write some Python script and deploy it somewhere. They have to actually be running your app and actually liking the content in the app. This increases the costs for attackers and reduces the amount of spam.
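To illustrate the economics (purely a hypothetical sketch, since the proposal doesn't define a server-side API): a service that trusts attested clients can reserve its blunt anti-abuse tools, like aggressive rate limits, for everything else. `verify_attestation` below is a made-up placeholder, not a real Google or WEI call:

```python
import time
from collections import defaultdict

def verify_attestation(token: str) -> bool:
    """Placeholder for whatever signed-token check a WEI-style attester would enable."""
    return bool(token) and token.startswith("valid:")  # stand-in logic only

recent_posts: dict[str, list[float]] = defaultdict(list)

def accept_post(user_id: str, attestation_token: str) -> bool:
    """Attested clients post freely; unattested ones hit a strict hourly rate limit."""
    now = time.time()
    if verify_attestation(attestation_token):
        return True
    window = [t for t in recent_posts[user_id] if now - t < 3600]
    if len(window) >= 5:  # unattested clients: at most 5 posts per hour
        recent_posts[user_id] = window
        return False
    window.append(now)
    recent_posts[user_id] = window
    return True
```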
Are you sure? He was the one who green-lighted Encrypted Media Extensions, the earlier, unfortunately successful attempt to shoehorn proprietary DRM blobs into browsers.
DRM blobs were already in browsers, it's the only reason why Hollywood let streaming services have websites at all. First it was trojan-horsed through Flash Player and Silverlight, and then individual browsers all licensed or built their own solutions[0] to make "plugin-free" DRM happen.
The attitude of the W3C was basically "we either kiss the ring or Hollywood forks us". So I can totally imagine Tim Berners-Lee spinning in his nonexistent grave then too. That doesn't mean he's Stallman levels of freedom-or-death.
[0] AFAIK, Google bought Widevine, Apple uses FairPlay, and Mozilla originally used Adobe but now uses Chrome's Widevine library.
> 6.1.1. Secure context only
> Web environment integrity MUST only be enabled in a secure context. This is to ensure that the website is not spoofed.
> Todo
Do they realize that you can use a custom certificate / patch the check routines? I don't think they quite realize what they are even suggesting.
You are the one being naive. This will be a cryptographically signed stack, from the TPM to the bootloader to the OS to the browser. If you flip a single bit away from the "approved" configuration, that signature will fail.
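A toy illustration of the measured-boot idea behind that claim: each component is hashed into a running value, the way TPM PCRs are extended, and the final value only matches the "approved" one if every component is bit-for-bit identical. Real remote attestation additionally has the TPM sign those register values with a key the verifier trusts; this sketch only shows why a single flipped bit is detectable:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style extend: new_pcr = SHA-256(old_pcr || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_chain(components: list[bytes]) -> str:
    pcr = bytes(32)  # measurement register starts zeroed at boot
    for c in components:
        pcr = extend(pcr, c)
    return pcr.hex()

stock   = [b"bootloader v1", b"kernel 6.1", b"os image", b"browser build"]
patched = [b"bootloader v1", b"kernel 6.1 (one byte changed)", b"os image", b"browser build"]

# Any change anywhere in the chain yields a completely different final value:
print(measure_chain(stock) == measure_chain(patched))  # False
```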
I'm not sure about this. TPMs can provide valuable features such as non-brute-forceable disk encryption and other secret management, and secure boot can be valuable protection for your devices. The real problem here is that this is allowing a third party to verify what software you are running. Doing these things on my device by my choice is one thing. Having another party require that I am using a specific unmodified software stack is another.
I'm surprised the ad corps haven't forked the internet yet: special drm-ed websites accessible only via special drm-ed browsers. At least it would relieve those who want to share knowledge from the presence of those who sell addiction.
The whole point of things like this is to force the open internet to be the one to fork away. The network effect is solved by having enough money to take over an existing network.