This is especially rich coming from Google, whose 'SafetyNet' for Android results in a significant reduction in security (contrary to its stated purpose): it locks out 3rd-party up-to-date and secure ROMs while allowing horrifically insecure manufacturer-provided ROMs to still pass, because disabling those would cause a massive user outcry. So it functions as vendor lock-in with no meaningful increase in security for the average user, while preventing more advanced users from improving their security without buying more hardware. This needs to be called out more to push back against the claim that this kind of attestation somehow has a legitimate benefit for the users.
And speaking of user-hostile, locked-down phones...
it's a galactic irony that Ben Wiser, the Googler who posted this proposal, has a blog whose most recent post is a rant about how he's being unfairly restricted and can't freely run the software he wants on his own device.
It's not clear when his most recent post was written; the server says "Last-Modified: Wed, 26 Jul 2023 06:00:31 GMT", but I believe I saw references to the post earlier than that in the current discussion.
(What's with the trend of completely omitting any dates on a blog?)
In uni the mantra from the professors was "put a date and version on everything you write for others".
Students still forgot in the first year but got heavily marked down for it. It quickly got etched into your brain to date and version just about anything you did.
Today when I see an undated blog entry it seriously affects my perception of the writer's integrity.
<item>
<title>I just spent £700 to have my own app on my iPhone</title>
<link>
https://benwiser.com/blog/I-just-spent-£700-to-have-my-own-app-on-my-iPhone.html
</link>
<pubDate>2022-03-04T11:30:34.067Z</pubDate>
</item>
Conversely, I hate trying to search for old articles and being unable to find them because something in the website's metadata says a blog post from 2004 is from 2018. It makes Google's time-window search (and, more generally, research into contemporary views) almost impossible.
Not the same thing. Attestation doesn't mean you can't run software you want on your own phone, which Android allows despite having build attestation APIs.
It means you can, but may then be ostracised from services for having an "unsupported" environment, which is in many ways even worse because it's leveraging peer pressure.
It is my understanding that attestation could be used to control which software is running on the client's computer prior to granting access to a web service, yes?
Otherwise, what would the point be of using it to, say, protect DRM content on a webpage if I can just attach a debugger to the process in question?
"The term cognitive distortions has often been used as a general umbrella term to refer to pseudo-justifications and rationalizations for their deviant behavior, and pro-criminal or offense-supporting attitudes (Maruna & Copes, 2004; Maruna & Mann, 2006; Ciardha & Gannon, 2011)." Helmond et al., Criminal Justice and Behavior, 2015, Vol. 42, No. 3, March 2015, 245-262
It seems that almost any software/website can be framed as having a legitimate benefit for users, e.g., increased convenience and/or security.^1 The more pertinent inquiry is what benefit(s) it has for its author(s). What does it do (as opposed to "what is it")? Let the user draw their own conclusions from the facts.
1. Arguably it could be a distortion to claim these are not mutually exclusive.
We can use web clients that do not leak excessive data that might be collected and used for advertising and tracking by so-called "tech" companies. Google would prefer that we not use such clients. But why not? A so-called "tech" company might frame all non-approved web clients as "bots" and all web usage without disclosing excessive data about the computer user's setup^2 as relating to "fraud". It might frame all web usage as commercial in nature and thus all websites as receptacles for advertising. This "all or nothing" thinking is a classic cognitive distortion.
Exactly! Ironically it's a possible reduction in security on custom ROMs as well if one chooses to bypass it, which is trivial, but requires rooting the device.
> This is especially rich coming from Google, whose 'SafetyNet' for Android results in a significant reduction in security (contrary to its stated purpose): it locks out 3rd-party up-to-date and secure ROMs while allowing horrifically insecure manufacturer-provided ROMs to still pass, because disabling those would cause a massive user outcry.
SafetyNet is deprecated, but it’s just been rolled into Play Integrity which does all the same things. All the same concerns still apply to Play Integrity.
GrapheneOS is asking developers not to use SafetyNet/Play Integrity (because they presumably block GrapheneOS), but instead to use the native hardware attestation API so they can specifically allow GrapheneOS keys. If a developer doesn’t allow their keys, they’ll be blocked.
You're using it wrong. SafetyNet is able to assert that the build the device reports is what it claims to be. After you know that, it's up to you to decide whether you trust communications from that build or not. If it's a known-insecure build, you can say that you don't. SafetyNet cannot assert that a third-party ROM is what it claims to be, so you have to decide whether you trust communications from that device or not without knowing what build is on it at all.
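To make that decision concrete, here's a rough sketch of the server-side logic I mean. The verdict shape below is my own simplification (the field names are assumptions, loosely modeled on what SafetyNet/Play Integrity report back), not the literal API:

    // Illustrative only: "Verdict" is a simplified stand-in for what an
    // attestation service reports back about the device.
    interface Verdict {
      buildRecognized: boolean;   // the service vouches the build is what it claims to be
      buildFingerprint?: string;  // only meaningful when buildRecognized is true
    }

    // Builds we choose not to trust even though they pass attestation,
    // e.g. vendor ROMs with known unpatched vulnerabilities.
    const knownInsecureBuilds = new Set<string>([
      // "vendor/device:11/...",  // hypothetical fingerprints
    ]);

    function trustDevice(v: Verdict, allowUnknownBuilds: boolean): boolean {
      if (!v.buildRecognized) {
        // Third-party ROM or tampered device: attestation tells us nothing,
        // so this is a pure policy call made with no build information.
        return allowUnknownBuilds;
      }
      // Attestation only tells us *which* build it is; whether we trust
      // that build is still our decision.
      return !knownInsecureBuilds.has(v.buildFingerprint ?? "");
    }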
Potentially, a manufacturer could make a multibuild phone where the user could switch between an attested build and a non-attested build and, with just a reboot, have access to services whose security requires attestation. Otherwise, you would use different devices for different purposes, as I do today. It's unfortunate, but if you really need something that isn't supported by the existing Android APIs, that's the only way.
Think about "don't use a smartphone" in 2013. That was viable back then.
It isn't anymore. What you can do is live smartphone-lite, using it only as a secondary device (as grandparent suggested). The same will be true in a couple of years (if the big G is successful). Until then, yeah, don't use it, and actively campaign against it.
If this happens the way google wants, I'll have to have a separate physical box set up specifically to access google's shiternet for things like banking and shopping. I'd be glad to stick to websites that have no need or interest for WEI otherwise.
The internet was already going increasingly-downhill anyway.
I don't use personal devices on corporate networks. If they want a phone with remote attestation, they can pay for it to sit in a drawer.
Though, at this point I am the founder of my own company. Any software we use will not require attestation. I would be willing to switch vendors over that.
As for web attestation: the software I use regularly needs to run on OpenBSD. It's that simple.
> so you have to decide whether you trust communications from that device
"You" in this scenario being, most likely, an engineer at a large, regulated, risk-averse corporation that might have to justify this choice during an audit.
Most (all?) corporate endpoint security systems use it right, in my experience. Even when using it right, you would have to block third party builds and cause outcry. You would additionally end up blocking some builds that SafetyNet (or Play Integrity) would attest.
> Even when using it right, you would have to block third party builds
Unless you have an obvious and accessible way of getting secure third party builds whitelisted, this is still a very anti-user approach, which is not justifiable unless the user of the device isn't its owner (like with company-owned work phones).
That's up to the service: they decide the appropriate level of security risk and whether they allow unknown builds. They already don't allow custom builds on any other mobile OS, so this is really the best you can get as a user. What is your proposed solution?
> They already don't allow custom builds on any other mobile OS ...
Keep in mind that Pinephones and similar are a thing. Lots of people are hoping they don't fizzle out and die off like previous "open" phone projects. :)
And Pinephones and similar don't have apps for these services that require attestation and never will. If some allow web access without build attestation, that works on custom Android builds as well.
> Can we just refuse to implement it?
> Unfortunately, it’s not that simple this time. Any browser choosing not to implement this would not be trusted and any website choosing to use this API could therefore reject users from those browsers. Google also has ways to drive adoptions by websites themselves.
This is true of any contentious browser feature. Choosing not to implement it means your users will sometimes be presented with a worse UX if a website's developers decide to require that feature.
But as a software creator, it's up to you to determine what is best for your customers. If your only hope of not going along with this is having the EU come in and slapping Google's wrist, I'm concerned that you aren't willing to take a hard stance on your own.
>But as a software creator, it's up to you to determine what is best for your customers.
Absolutely zero large web properties do anything based on what's best for users. If this gains traction, Google will simply deny adsense payments for impressions from an "untrusted" page, and thus all the large players that show ads for revenue will immediately implement WEI without giving a single flying shit about the users, as they always have and always will.
I think this is a little reductive. WEI is likely what some people at Google felt was best for AdSense's customers, i.e. advertisers. It just so happens that Google has a whole other set of customers who this is not best for, e.g. Chrome users, YouTube users. The problem is that it's all coming from one company, and AdSense is where the money is at, so I don't trust Google to make the best decisions for their secondary customers.
I definitely agree that AdSense blocking clients that don't implement WEI seems likely. At that point, it will be up to websites that rely on AdSense revenue to decide what to do with customers they aren't monetizing. That's already a question they have from users with ad blockers, although that is a little bit more challenging to detect.
My hope is that the majority of sites accept that they can't rely on ad revenue, and instead resort to directly monetizing users as a way to make ends meet. IMO that's a better relationship than indirectly selling their data and attention.
>My hope is that the majority of sites accept that they can't rely on ad revenue, and instead resort to directly monetizing users as a way to make ends meet.
How?
You see, this is the problem I have with all these debates where advertising is declared the villain. "Directly monetising" usually means subscriptions and logins, which means you lose all anonymity, not just gradually like under an ad targeting regime, but definitively and completely. Now payment processors and banks also get a share of the surveillance cake.
The greatest irony is that you may not even get rid of advertising. Advertising only becomes more valuable and more effective. All the newspaper subscriptions I have run ads.
The second issue is that advertising is paid for by consumers in proportion to their spending power, because a certain share of every £$€ spent is used to buy ads. Therefore, rich people fund more of our free at the point of use online services than poor people do.
If rich people move to subscriptions, this subsidy ends. Poor people will either be cut off from high quality services and relegated to their own low quality information and services (as is already the case with newspapers) or they will have to suffer through even more advertising.
Fair criticism that I used "ad revenue" as a generality; I was more specifically thinking of AdSense ads and the like. I think there are plenty of forms of advertising that are better for the relationship and less exploitative of users, such as corporate sponsorship or sponsored content ("featured" search results, brand collaborations, etc.), as long as the relationship is clear when something is paid vs. organic.
> Now payment processors and banks also get a share of the surveillance cake.
I agree this is a problem. I work on Bitcoin and the Lightning Network, so that's my preferred solution to the problem, but there are other approaches to addressing the poor state of privacy and payments too. I don't think that that being a problem means that the relationship we have with advertising isn't as bad though.
> If rich people move to subscriptions, this subsidy ends.
There are plenty of examples where this is not the case. The freemium model exists in places where injected advertisements are not the norm, such as free-to-play games. Fortnite whales subsidize millions of low-income players to get a high-quality game for free. Whether or not you think the relationship between Epic and its players is a good one is another question, but it's a model that can continue to exist without advertisement. Especially when free users are necessary to provide content for paying users, like posts on Twitter or Reddit, or players in a game.
Freemium, by definition, means that free users get inferior service compared to premium users. This is not the case with purely ad funded services such as Google search.
Granted, the difference between the tiers may be small enough in some cases for this to be an acceptable compromise, but the principle is still the same.
> At that point, it will be up to websites that rely on AdSense revenue to decide what to do with customers they aren't monetizing.
Isn't this a no-brainer? Ad-funded websites have zero incentive to serve pages to ad blocker users. Not only do they not make any money from them, those users cost them money.
- cost mostly marginal money
- continue to use your platform, potentially watch ads later
- their usage can be sold to anyone: where are they at a given time and what are they doing
- don't go to rival platforms
- tell their friends about the website
- etc
I visit a lot of websites that show blank pages upon seeing that I have an adblocker, so the technology to refuse serving those who have adblock is already there. 100% of the time, I saw the message to disable the adblocker and just left the website.
This tech is not there to prevent serving content to people who adblock; this technology is there to make sure that people don't have the ability to make that choice, and to force certain setups that prevent adblocking.
Showing some ads on an ad-funded website is the admission fee. You're not willing to pay it, which is completely fine, so just don't go to that website. But if you keep coming back to an ad-funded website because you enjoy their content, it's only fair that you disable the ad blocker on that site.
Those sites that showed you the "disable ad blocker" pop-up that prompted you to leave won't miss you.
It's not an admission fee. The website is hosted on a publicly accessible web server. There is no admission fee. The browser I am using serves my purposes; if I wish to strip certain elements from the page, add new ones or reformat the page any way I see fit, I can and am allowed to.
The point Google seems to be making, quite clearly, is that the browser should not serve my needs, but the needs of Google's paying customers.
We're talking about ethics here, not laws nor what's technically possible.
You want to support the ad-funded website you keep coming to, yes or no? Yeah ideally every website would have a paid option for the HN crowd with cushy jobs, but that's not always feasible.
> We're talking about ethics here, not laws nor what's technically possible.
In that case, ads, being psychological manipulation to get users to do things they would not otherwise do, are already highly unethical. The ethical thing to do is to discourage their use, which includes blocking them for yourself and thus making them less profitable overall.
Except they don't just show you ads, they heavily track you in all kinds of disingenuous and non-transparent ways.
But hey, it's great that some people want to make the devices they own, devices that hold extremely valuable data about their own person, something controlled by external entities.
Don't worry, those of us who know our tech and value our privacy, will continue not listening to the "just take it" crowd.
And I won't miss them; not sure what point you're making. I am not able to know if a website has ads before visiting it, and I can only leave once it has asked me to.
I run an ad blocker on all of my devices (mix of uBlock or Brave browser) and I'm pretty surprised how infrequently sites ask me to disable it to access content. Not sure if your experience is different.
Any proper gate for a user to access content should be managed by a server, not on the client side. It would have to be the same for WEI. If they can detect that I'm using an adblocker, there's no reason they couldn't prevent me from accessing content by not even serving the content in the first place.
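Roughly what I mean, as a toy sketch (assuming an Express-style server; the entitlement check and header name are placeholders I made up, and in practice this could just as well check an attestation token):

    import express from "express";

    const app = express();

    // Placeholder check: in reality this would look at a session, a
    // subscription record, or an attestation token. The header name is
    // purely hypothetical.
    function hasEntitlement(req: express.Request): boolean {
      return req.header("x-subscriber-token") === "paid";
    }

    app.get("/article/:id", (req, res) => {
      if (!hasEntitlement(req)) {
        // The gate lives on the server: the content is simply never sent,
        // so there is no client-side overlay for an ad blocker to strip.
        res.status(403).send("Subscribe to read this article.");
        return;
      }
      res.send(`<h1>Article ${req.params.id}</h1><p>Full content here.</p>`);
    });

    app.listen(3000);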
It's a cat and mouse game between the ad-blockers and ad-blocker-blockers. I imagine if WEI actually becomes a thing, WEI countermeasures will also emerge, and WEI counter-countermeasures soon after.
A better plan might be for websites to find a better way to sustain themselves, possibly by running ads that are more relevant and less obnoxious so that users wouldn't block them.
The conflict emerges because the device is mine, not theirs. I ask for content, they send it. It seems an unfair burden to collude with OS and hardware manufacturers to force the user to give up general computing in order to visit an otherwise unknown web page. It would be a much better experience to go dark with content like Twitter has done than to remove general computing from the web.
Detecting the ad blocker and serving the page that displays the "disable Adblock" message is likely more costly than just serving the content (unless it's video).
That being said, creators need money to keep making what they are making. Too bad ads are such an all-encompassing method. The web is literally worse with them, but would not have been as big without them.
Why would Google not monetize unattested traffic?
I mean that's like Google blocking its own ads from being shown.
I don't know much about the online ad market. I assume advertisers will pay more for attested impressions than for unattested ones. But unattested impressions will still be worth something.
>Why would Google not monetize unattested traffic? I mean that's like Google blocking its own ads from being shown.
It's very simple. Google has concerns of click/impression fraud. Unattested traffic would be more likely to be fraudulent. Not paying for unattested impressions/clicks is therefore an easy way to cut costs and combat fraud.
I don't even know who you think I'm dismissing. Mine is more of a generalized ennui directed at HN as a whole.
I don't think Google has actually done anything. The bar for experimenting with new code in Chromium is pretty low. This Chicken Little reaction to a non-starter is just a result of developing in the open.
Your post implies that anyone against this attestation is just going "Google terrible no good very bad evil", because Google "caring about real impressions" is what they said they wanted.
But you can "care" about something in good and bad ways, and the criticism is not "Google bad".
> I mean that's like Google blocking its own ads from being shown.
Chrome will happily block a Google ad if it uses too many resources; I experience this a lot with a few sites that do ad replacements in the background.
> Why would Google not monetize unattested traffic? I mean that's like Google blocking its own ads from being shown
Because this is an incredible way of exerting their total control over the web across all browsers. If they don't like a feature, they get to downgrade the user's attestation or fail it. If it costs them some unattested traffic in order to create a permanently unassailable market position, it's worth the money.
It'll block all search engines other than those blessed by Google, by preventing web scraping. For this reason alone many websites will adopt it. This will impact competition, research and freedom.
After this, all user choice is gone, and it'll only be governments who can break the racket.
If the CCP don't already do this, I expect they'll quickly implement something similar.
Because it makes payouts from advertisers more likely. If I'm advertising on Google's platform I don't want to pay for a web-scraping robot to see my ad.
Rather than not selling that space, Google would later let ad buyers be aware of the parameter, and they would bid less for unattested views. Therefore, Google would reward sites less for such pages, and the sites would be incentivized to block you.
Are you sure about that? I am quite optimistic. It's not the first dominant-position-abusing crap from Google; they also tried to impose AMP and to rank sites without it at lower positions, but AMP was ultimately fined out of existence. I am all for regulations and fining Google out of existence, but I am thinking that maybe this is another product that serves to make shareholders sleep well and will not really see any significant adoption.
> Choosing not to implement it means your users will sometimes be presented with a worse UX if a website's developers decide to require that feature.
I think this makes a category error. Most browser features/APIs are indeed treated as progressive enhancements by web developers, at least until an overwhelming number of the users have access to that feature. And even then, even if the developer makes assumptions that the feature/API is present, often the result is a degraded experience rather than an all-out broken experience.
The same is not true of web attestation. If a website requires it and a browser refuses to implement it, in at least some cases (probably a concerningly high number of cases though) the result will be that the user is entirely locked out of using that website.
It's also worth noting that _even if_ Vivaldi implements WEI, there's a solid chance that the attestation authority (Google, Microsoft, Apple) or possibly the website itself[1] will not accept it as a valid environment at all! After all, what makes Vivaldi not a "malicious or automated environment" in their eyes? What if Vivaldi allows full ad blocking extensions? User automation/scripting? Or any example of too much freedom to the user. Will the attestation authority decide that it is not worthy of being an acceptable environment?
[1] if this ends up spiralling out of control by allowing the full attestation chain to be inspected by the website
> The same is not true of web attestation. If a website requires it...
I don't think I've made a category error, that again is true of all browser features. If your browser does not support JavaScript or WebSockets or WebGL, many sites would lock you out of them entirely as well. It's a choice of the website creator what to assume and what to require, and how to degrade the experience or offer alternatives when a feature is missing.
The way I imagine it, WEI will start with skipping CAPTCHA. Then it will be about serving ads (users without WEI would generate no or very limited ad revenue.) Then it's up to the owner of a site whether or not they want to allow non-WEI traffic at all. Some will choose to block users without WEI, and hopefully the number of browsers that have chosen not to implement it, and the number of users on those browsers is high enough that that option will not be appealing.
I hope that Vivaldi remains one of the browsers that doesn't implement it, whether or not the EU rules against it.
>The way I imagine it, WEI will start with skipping CAPTCHA. Then it will be about serving ads (users without WEI would generate no or very limited ad revenue.) Then it's up to the owner of a site whether or not they want to allow non-WEI traffic at all. Some will choose to block users without WEI, and hopefully the number of browsers that have chosen not to implement it, and the number of users on those browsers is high enough that that option will not be appealing.
There are a number of issues with your imagined scenario. I'll address two of them. Firstly, as nvy points out[0]:
If this gains traction, Google will simply deny adsense payments for
impressions from an "untrusted" page, and thus all the large players that
show ads for revenue will immediately implement WEI without giving a single
flying shit about the users, as they always have and always will.
This is the primary reason Google wants WEI -- to make it harder for users of ad/tracking blockers to access sites they sell ads on.
The second issue is who is providing this "attestation" and what their criteria might be for "trustworthy" browsers. This will break down to a handful (Google, Microsoft, Apple and maybe Cloudflare and/or one or two others) of trusted "attestors" who will decide which browser/plugins/OS combinations are "trustworthy."
Since these folks all have a stake in walled gardens^W hellscapes, who's to say that Apple won't "attest" that any browser other than Safari on iOS or MacOS isn't trustworthy? Or Google may decide that any browser with uBlockOrigin, uMatrix or NoScript isn't trustworthy -- thus permanently deprecating ad/tracking blockers.
Since the spec doesn't specify the criteria for a "trusted" client, nor does it allow for the web site to determine for itself what constitutes the same, it's almost certain that such "trusted attestors" will penalize those who don't dance to their tune.
There are a host of other issues with WEI, especially privacy and property rights related, but those two (IMHO) are most relevant to your imaginings.
I'm not sure any of that refutes the scenario I laid out. Google denying adsense payments is exactly what I said would happen. It would then be up to the site as to whether or not they would continue to allow traffic from users who they aren't getting ad revenue from. I've been at companies who have had this exact debate about how to handle users with ad blockers.
I completely agree about the spec's vagueness about what makes a client trusted, and that attesters can choose arbitrary criteria, and will likely favor things that make the walls on their gardens higher.
I hope you're not misunderstanding my position, I think WEI is bad for users and I'm hoping that alternative browser vendors like Vivaldi take a stand to not implement it.
You're not wrong about any of this, but I have very little faith that alternative browsers not implementing this will have any sway in avoiding the lockout outcome :-(
Vivaldi's entire reason for being is "I literally cannot bring myself to just use firefox instead so I'll bend over backwards to try and remove objectionable things from chromium and still end up supporting chrome as the web default"
What sets WEI apart is that it, in a way, exerts power over your choice on how to implement other web features, for example whether you're allowed to block elements, or even just show a developer console.
Other than Encrypted Media Extensions (and these are much more constrained than WEI!), I don't know of any other web standard that does that.
Getting a secure context costs $0 and takes no effort in many common webservers at this point.
I do remember the controversy at the time of everybody shifting to HTTPS only, though, and how it might exclude small/hobbyist sites. Fortunately, we've found ways to mitigate that friction in the end. I'm much less optimistic here.
The thing is, yes it was controversial at the time to enforce HTTPS, but on the other side I 'member pwning people with ARP spoof attacks (both to steal cookies and credentials as well as simply redirecting all images to porn) at my school already way over a decade ago, and all I had was a laptop, Wireshark, Metasploit and some other piece of open source software whose name I forgot. No ARP sponge, and the internet uplink was 10/10 Mbit anyway, so it was easy to do that shit for the entire school. A year later someone packaged all that stuff into a single tool even a complete dunderhead could use to prank and steal Facebook sessions at will.
Basic reality and the ease of attacks made it impossible to stick with HTTP for much longer. And hell, if I watch Scammer Payback on YouTube, I'm beginning to think it might be a good idea to disable developer tools in browsers and to only unlock them if you can prove physical, un-remoteable access to a machine, similar to Apple's SIP.
How so? I don't see how a secure attention sequence (i.e. what Windows used to do with requiring Ctrl+Alt+Del to be pressed to log in) could be a bad thing.
On the other hand, you can bet that that's absolutely something scammers will be able to convince people to do while they're on the phone with them...
That, or a reboot while pressing F8 with a clear prompt: "Enabling developer mode; do not do this if asked to by phone support". Easy enough for actual developers and tinkerers, but disruptive for someone getting scammed.
> On the other hand, you can bet that that's absolutely something scammers will be able to convince people to do while they're on the phone with them...
Indeed but it will slow them down significantly and reduce the amount of marks by a significant amount as well.
The crucial difference between the two is that I get to decide which contexts I consider insecure. For convenience I may choose to let an agent decide on my behalf.
This is fundamentally different from a world where Google gets to decide if I am a risk to them.
Since Google also controls the most popular search engine and ad network, they can exert very significant pressure on web developers by refusing to place ads or drive traffic to websites that don't comply.
I already block all ads so I'm obviously not totally sympathetic to developers who make decisions based on what will maximize ad revenue, but it still is not fair to put the burden on developers here and say "it's your choice, just say no".
2) Dart: Google wanted this to replace JavaScript, but Mozilla and MS both said no way, as they had no part in it. So that project ended up dying.
Google tries lots of things. Mozilla, MS, and Apple are still strong enough (especially outside the US) to push back on things that they think are a bad idea.
Dart is still around. The Flutter framework is growing in popularity.
Apple already built and shipped this same feature last year, so they're not opposed. MS? Probably gonna love this. Mozilla hasn't said anything on it (yet at least). I'm not expecting any of those players to save us.
Is that the one rendering [1] text and UI widgets into an HTML canvas element from JavaScript/Dart (completely coincidentally breaking ad blocking in the process)? What a beautiful piece of software.
> Apple already built and shipped this same feature last year,
Are you referring to Private Access Tokens (PAT)? These seem quite a bit more limited in what they do. WEI seems to specifically set out to roll back some of the blinding/anonymization aspects of PAT under the banner of debuggability/providing "feedback" to attesters.
> Is that the one rendering [1] text and UI widgets into an HTML canvas element from JavaScript/Dart (completely coincidentally breaking ad blocking in the process)? What a beautiful piece of software.
Yep. I'm not saying Dart is a good thing - I've never used it and don't currently have plans to. All I'm saying is that it is NOT dead as GP asserted.
> Are you referring to Private Access Tokens (PAT)? These seem quite a bit more limited in what they do. WEI seems to specifically set out to roll back some of the blinding/anonymization aspects of PAT under the banner of debuggability/providing "feedback" to attesters.
Yes. PATs don't provide as much information about the attestation to the website, but they do provide the critical part which is "is this person using a blessed client." That's plenty for a website to block people on.
You need to look back at the history of Dart. It was created by the Chrome team, with many of the people who worked on GWT taking part in it. It was created to solve Google's issues with JavaScript. This endeavor failed as no other browser makers picked it up.
We then got AngularJS, but with Dart (AngularDart). This was again trying to improve the coding experience of making web apps.
When TypeScript came along and the Angular team picked it up, TS became the primary path forward (though AngularDart is still getting updated).
At this point Dart wasn't seeing a lot of attention. The Flutter team was able to pick up Dart as the primary owner and has been driving it since then.
Someone argued yesterday that in instances like this users are choosing what to use of their own free will. At the micro scale sure, at the macro scale I disagree. Users want their shit to work and if you play these shenanigans it's less of a choice and more of a ransom.
Insects in a swarm can choose where to go but they can't choose where the swarm goes.
> If your only hope of not going along with this is having the EU come in and slapping Google's wrist, I'm concerned that you aren't willing to take a hard stance on your own.
I take umbrage at this implication. When a monopoly like Google takes anti-competitive actions, it's not fair or just to expect individuals to stand up to it. Governments exist to counter anti-competitive behavior like this, and governments have been doing a terrible job chopping down companies with too much vertical integration lately.
The author should have asked "Can we just implement it then?" because in some cases you literally can't implement the proposed API. That's the core issue with it. Unlike other contentious browser features, even if you wanted to implement attestation, it may be impossible to do so. More precisely, attestation may be impossible to implement on some platforms to the de facto standard that would develop over time. The de facto standard I refer to is the list of attestors web servers will accept. If your platform can't be attested by an approved attestor, you're screwed. That's why it's not that simple this time. The proposed attestation API is literally unimplementable in general. You can't implement it and you can't not implement it.
Well hold on. The problem with attestation is you're damned if you do and damned if you don't.
If you use a browser which supports attestation you will be denied service by companies who disapprove of what you run on your computer.
If you don't use a browser which supports attestation you will be denied service by companies who disapprove of what you run on your computer.
So everyone loses. If this goes live everyone in the world loses.
It is an utterly heinous proposal. It is perhaps the worst thing Google has ever produced. I use Firefox and will never use any browser that implements attestation, even if I have to stop using most of the WWW one day.
But unfortunately individual action is not going to be enough here, because no matter what you do, you lose.
> If your only hope of not going along with this is having the EU come in and slapping Google's wrist, I'm concerned that you aren't willing to take a hard stance on your own.
This is indeed concerning. I'd like to see Brave's response to this, and we already know how Firefox has responded.
This point in the blog post saddens me. Chrome's market share is huge, but Chrome is not ubiquitous. There was public outcry when Google was suspected of making youtube have "bugs" on non-Chromium browsers - having them just straight up disable services for more than a third of users would result in an actual shitstorm, more than any of us could hope to drum up with an explanation of why this change is bad.
It would also drive the point home to the very same legislators that the author is deferring to.
If browsers now start pre-emptively folding, Google just straight up won. It's great that the Vivaldi team is against this change, but a blog post and hoping for regulation just won't cut it. You have actual leverage here, use it.
Working on the Chromium codebase and maintaining even a minimal fork is very hard. It requires a very competent and quite large team (because there's a lot of work to resolve merge conflicts) just to regularly apply a custom set of patches to newly released Chromium versions.
It's like staying on a dancing elephant. And it requires MONEY. Lots of it.
I suspect this is Google's desired result, to protect Chromium despite it being open source.
I think that just meant some users with sufficient karma flagged it, but I was a bit confused because for a while it didn't say "[flagged]" but didn't show up in the first several pages or continue to get upvotes. Is there a delay in saying "[flagged]"?
> Any browser choosing not to implement this would not be trusted and any website choosing to use this API could therefore reject users from those browsers.
If we are serious about protesting this, let’s do as follows: We implement code in our websites that checks whether the user agent implements this API. If the check passes, we tell the user that their browser is not welcome and why that is.
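Something like this, as a minimal sketch. I'm assuming the API would surface as navigator.getEnvironmentIntegrity, the name used in the draft explainer; treat that name (and the banner wording) as placeholders:

    // If the browser exposes the attestation API at all, show the visitor
    // a notice explaining why this site objects to it.
    // The property name is an assumption based on the draft explainer.
    function implementsWei(): boolean {
      return typeof (navigator as any).getEnvironmentIntegrity === "function";
    }

    if (implementsWei()) {
      const banner = document.createElement("div");
      banner.textContent =
        "Your browser implements Web Environment Integrity. " +
        "This site opposes remote attestation of users - here is why: ...";
      banner.style.cssText =
        "padding:1em;background:#fee;border-bottom:1px solid #c00";
      document.body.prepend(banner);
    }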
> let’s do as follows: We implement code in our websites that checks whether the user agent implements this API. If the check passes, we tell the user that their browser is not welcome and why that is.
I am sympathetic, I agree let's all do that....
...I cannot imagine any of the money people I work with agreeing
As usual, a thousand-word essay on Google's WEI without ever mentioning that Apple silently sailed that ship a while ago, therefore not attracting any attention or backlash.
I didn't notice it because I, just like a majority of internet users worldwide, do not own any Apple products and therefore I was never affected and probably never will be.
I do, however, routinely interact with websites that implement Google Analytics and/or Google ads. If those sites start rejecting my browser of choice I will most certainly be locked out of a significant portion of the internet. And the remaining 60% of all internet users would be essentially forced to accept this technology or else. That's an order of magnitude or two more users, and seems to me like a good reason to raise the alarm.
Exactly. Websites will not require this version because they know that Safari is a minority market share and they can't force users to buy an Apple product. However if this is supported by Chrome and Safari all of a sudden the equation flips and many sites will feel that they can reject service to other users.
Safari is not only the leading browser on mobile, it is the only choice iPhone users have, unlike Chrome, which users can choose not to use. I would be more wary of Safari changes than Chrome changes.
No it's not? Android has upwards of 70% of the mobile market[0], and Chrome has nearly 65% of the mobile browser market, compared to Safari with under 25%.[1]
> the only choice any iphone users have
Sort of. WebKit is the only choice iOS users have, but there are plenty of browsers available on iOS (including Chrome and Firefox) that use WebKit, not just Safari.
WEI can't lock people out of sites either. It's all on the website owner. A site owner could easily lock out Apple users who aren't authed via PAT today if they wanted to. The only thing that's stopped them from doing so already is that most users are on non-Apple browsers, so it wouldn't make sense.
> It is also interesting to note that the first use case listed is about ensuring that interactions with ads are genuine.
That's just the beginning. Attestation will eventually allow advertisers to demand that user is present and looking at the screen like in Black Mirror episode Fifteen Million Merits.
> In it, TV viewers are only able to skip an advert by shouting the name of the brand. Yep, crying 'McDonald's!' is the only way to make the Big Mac disappear.
Companies will do the most insane, terrible things if not stopped. This will happen.
Not sure that it is in Sony's case. But creating patents for anti-user ideas that you don't intend to use locks the idea away for at least a while, and could be seen as pro-user.
Can't wait till we've added another turtle to the stack with a full browser engine implemented in WASM running in a host browser that is mandatory for all media sites.
I agree with the first. The second I think is missing the target. This really doesn't have anything to do with search. Instead this is Google (the largest ad seller) using its market position (as the maker of Chrome/Chromium, the most popular browser) to prevent users from not seeing its ads on any website where they're displayed.
While I believe that the idea of splitting Search and Ads could be a game changer, how would Search become profitable without Ads, and without compromising the rank algorithm?
It's never going to be against the economic interest of search engines to show ads, they can sell spots on their front page which are always going to be valuable.
This should be against their tactical interests, because it hurts their accuracy driving away users, but absent a significantly more accurate competitor they'll get away with it for a long time.
Regarding Google search there are some hopeful signs. For one, some people report Google's accuracy dropping, and Google keeps switching up its idiosyncrasies to avoid spam, but in doing so they devalue the effort people put into SEO and into refining their Google-fu. These might be the same thing, however.
I can see this show up on Youtube (why not - under Google's control, and they want you to watch the ads on their official browser) and on banking apps. Initially. In the longer run, it either withers and dies, or it leads to antitrust action. I really can't see another way.
This will probably be implemented by every streaming service very quickly to try to prevent piracy (which won't work), and will only end up harming people who just want to watch on more freedom-respecting browsers or operating systems
I'm not certain what you mean by "managing your own channel" in particular, but I use Chatterino[0] and find it to be a solid pairing with streamlink + VLC as far as just chat goes. It's also pretty straightforward to build from source which is always nice to me at least.
The login made a zero-payload POST to https://passport.twitch.tv/integrity and it responded with 400 and a JSON body {"error_code": 5025, "error_description": "integrity failed", "error": "Oops! We encountered an unexpected error. Please try again.", ...}.
It seems that this is not about GNU/Linux, though, as it happens at random (searches for `twitch "integrity failed"` produce results from all sorts of platforms and browsers). Must be that some pointy-haired boss had some important ideas about security.
I was able to log in from Firefox on a different GNU/Linux system, so it's not like those are always blocked. I suspect there's some User-Agent whitelist or similar kind of nonsense (though, looking at the console logs and the bunch of WebGL errors, it certainly tries to fingerprint the system), but I'm too lazy to investigate this any further.
g**gle and other PRISM partners do not want any users on freedom-respecting browsers/OSes. forcing people onto chromium based browsers isn't an unfortunate side effect, it's a secondary goal of the specification.
Actually, absent a full chain-of-trust from boot, which I believe Android/iOS do provide, and possibly the proprietary desktop environments can provide, it should be possible to fake the "I'm a legitimate browser" exchange. Which is what the 1% that care will do. But it sucks to have to go to deep underground "crack" type stuff where before there was an open web. Not to mention the risk of getting hit by the banhammer if detected.
Banks are not the target of this. If Banks do something that inhibits people with disabilities, corporate account managers with disabilities, or senior citizens, they will get skewered. They will tread carefully.
How exactly is WEI any worse than, say, a peephole on a door? At the end of the day bots are a huge problem and it's only getting worse. What's the alternative solution? You need to know who you're dealing with, both in life and clearly on the web.
I'm probably alone in this, but WEI is a good thing. Anyone who's run a site knows the headache around bots. Sites that don't care about bots can simply not use WEI. Of course, we know they will use it, because bots are a headache. Millions of engineer hours are wasted yearly on bot nonsense.
With the improvements in AI this was inevitable anyway. Anyone who thinks otherwise is delusional. Reap what you sow and what not.
edit: removing ssl comparison since it's not really my point to begin with
SSL is in practice only used for server certificates. It was kinda shit and a lot of people complained because of CAs, but then we got Let's Encrypt etc., which alleviated the situation. And the identity is only tied to domain control, unlike e.g. code signing certs, which are orders of magnitude more invasive and frankly a racket.
In either case, WEI has the potential to be proper DRM, like in the “approved devices” fashion. It’s deeply invasive, and can be used to exclude any type of usage at the whim of mega corps, like screen readers, ad blocking, anti-tracking/fingerprinting, downloading copyrighted content, and anything new they can think of in the future. It’s quite literally the gateway to making the web an App Store (or at best, multiple app stores).
> What's the alternative solution?
To what problem? Bots specifically or humans who want to use the web in any way they want?
If bots, then elaborate. Many bots are good, and ironically the vast majority of bot traffic comes from the very corporations that are behind this stuff. As for the really bad bots, we have IP blocklisting. For the gray/manipulative bots, sure, that’s a problem. What makes you think that problem needs to be addressed with mandatory handcuffs for everyone else?
Why should sites be obligated to let anyone in? Do you let anyone into your house? I'm surprised WEI wasn't implemented long ago.
This notion of destroying the open web is so nonsensical. WEI is not obligatory. If it's being implemented it's because it solves a real problem. Think about it. There will still be sites that don't use it.
People's real issue is that the big sites will use WEI because the problem it solves is legitimate but they don't want to identify themselves, which makes sense, but they were never obligated to let you visit their site to begin with.
The issue is not that all websites should let anyone in, it's that Google often controls the entire stack of website, ad network, browser, operating system, and mobile device. So Google can use this to pressure web users into using Google products that they otherwise would not have used, without providing any benefits. You can't use Google Search without attesting that you're browsing with unmodified Chrome on unmodified Android on an unmodified Pixel, for example. Or, an independent website can't run Google Ads unless it verifies all users are visiting using approved Google web environments.
If it were impossible for a company to have such a high market share in all of these areas at once, this proposal would be much less concerning.
But how is this different than Google or any other company provide their services only through native apps? They can choose today to cut anyone who is not using native app and they are choosing not to do so.
The web is flooded with people complaining that their google accounts were terminated for seemingly arbitrary or random reasons. They are choosing to do so.
A good number of those companies only provide native applications for mobile, and web applications for desktop (Google Docs/Sheets is a good example). If forced to use only native apps, you'd be locking people using desktops out for the most part.
They're not. Depending on your competency, you have a _ton_ of tools at your disposal for filtering traffic ranging from basic throttle to sophisticated behavior/request profiling.
I've spent more than a little bit of my career dealing with bots and I'm not really sure that a cryptographically signed blob proving that the request came from $thisSpecificVersion of firefox running on $thisExactVersion of osx is really going to help me.
I don't care _what_ made the request because that can always be spoofed; this cat and mouse game always ends at the analog loop hole. I care about what the request(s) are trying to do and that is something that I figure out with just the data I have server side.
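For concreteness, the most basic of those tools looks something like this toy fixed-window, per-IP throttle (real deployments layer request profiling, reputation and so on on top; the limits here are arbitrary):

    // Toy per-IP throttle: the crudest of the server-side tools mentioned above.
    const WINDOW_MS = 60_000;   // 1-minute window (arbitrary)
    const MAX_REQUESTS = 120;   // per IP per window (arbitrary)

    const counters = new Map<string, { windowStart: number; count: number }>();

    // Returns true if the request should be served, false if it should be throttled.
    function allowRequest(ip: string, now = Date.now()): boolean {
      const entry = counters.get(ip);
      if (!entry || now - entry.windowStart > WINDOW_MS) {
        counters.set(ip, { windowStart: now, count: 1 });
        return true;
      }
      entry.count += 1;
      return entry.count <= MAX_REQUESTS;
    }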
>I've spent more than a little bit of my career dealing with bots and I'm not really sure that a cryptographically signed blob proving that the request came from $thisSpecificVersion of firefox running on $thisExactVersion of osx is really going to help me.
It'll end DDOS by botnet. Compromised computers would (presumably) have to run a full browser. That's much more computationally expensive and (presumably) the user would see it running.
And the flaw here is that the proposal doesn't do enough. If that signed blob allowed you to uniquely ID the device it would help solve a lot more problems. That would end DDOS for the most part and make managing abuse a lot easier.
> And the flaw here is that the proposal doesn't do enough. If that signed blob allowed you to uniquely ID the device it would help solve a lot more problems.
This is more or less what the proposal does? It's akin to the same shady stuff seen here [1] except this time some third party gets to sign it.
> That would end DDOS for the most part and make managing abuse a lot easier.
Not every bot that I'm defending against is a DDoS but I can probably figure out a way to overwhelm the "pre-content" filter that's trying to figure out if a token is legit or not.
>Not every bot that I'm defending against is a DDoS but I can probably figure out a way to overwhelm the "pre-content" filter that's trying to figure out if a token is legit or not.
That's true of every DDOS filter. It doesn't mean that having a cryptographically secure way to make requests more expensive to produce isn't a tremendous help.
>This is more or less what the proposal does? It's akin to the same shady stuff seen here [1] except this time some third party gets to sign it.
The fingerprint isn't unique to the extent that you can rely on it always correctly identifying a single user. So you can't ban based on the fingerprint or automatically log someone in.
> It doesn't mean that having a cryptographically secure way to make requests more expensive to produce isn't a tremendous help.
A malicious actor wouldn't bother. They'll tap `/dev/random` when it comes time to send the blessed token to the origin. The onus is going to be on the origin to figure out that it's _not_ a valid/signed token. If it's as easy for the origin to do this as it is for the adversary to tap a RNG then what was the point? If it's harder for the origin to figure out my token isn't legit than it was for me to generate one, how is the origin better off?
In any case, you're filtering the DDOS out *after* you've managed to set up the TCP/TLS/HTTP connection. That seems to be a rather late/expensive point to do so!
>If it's as easy for the origin to do this as it is for the adversary to tap a RNG then what was the point? If it's harder for the origin to figure out my token isn't legit than it was for me to generate one, how is the origin better off?
>Because it's less computationally intense than serving responses and/or trying to fingerprint malicious actors. It also tells you with near certainty that the request is malicious and future requests from that IP can be blocked.
> It also tells you with near certainty that the request is malicious and future requests from that IP can be blocked.
So I can still use this to DDOS. My malware running somewhere on your network just needs to submit a bogus request from your IP address. Origin sees the bogus requests from your IP and now that IP is on the bad list. Later - your legit requests from the same IP are ... denied.
I don't know that an "inverse" DDOS is novel, but it's certainly not been common. Perhaps that may change in the future...
It will help a lot of services like Cloudflare, basically stopping most of the spamming/ddos on sites behind it. Big cloud vendors will probably implement similar solutions, in which you will be able to allow only attested traffic to your site.
> Why should sites be obligated to let anyone in? […] This notion of destroying the open web is so nonsensical.
I mean.. I think you’re answering your own question here.
You can argue that the web shouldn’t be open. In fact, there are many arguments for that, which I don’t mind arguing against.
There are many things that do not belong on the web, precisely because it’s open. For instance, a registry of people’s political views. Or naked pictures you don’t want the world to retain forever. And so on. The fact that the (open) web is not suitable for everything has been true since its inception. Openness comes with trade offs.
The honest way of putting it, is that WEI wants to make the web less open so that it can have more content, or protect content better.
On an opt-in basis, this is fine in theory. But WEI would never ever be opt-in with meaningful consent. It’s entirely dead in the water there, because non techies will not understand what or why this is “needed”. Heck, people don’t even grok cookies. In practice, this will be enabled by default, which is exactly the fear. Alt browsers would be bullied to support it, and users would be forced to use it.
I want to build a business in the near future that will need to scrape large bits of the web, but I probably will not be able to with this in place. I wouldn't mind so much if I didn't have a sneaking suspicion that the company pushing this won't have any problem running their business that depends on scraping a large part of the web.
I don't let anyone in my house, therefore what? Therefore I am joining a worldwide program whereby I am able to find out from a source I choose whether I want to let this person into my house. If they don't make their information available to my trusted source, they ain't getting in.
Also my house happens to contain things that billions of people want to see and use, but they have to sit through my time share pitch first. And they HAVE to listen.
> If it's being implemented it's because it solves a real problem.
If something solves a real problem, must it then be implemented?
Also, it solves a problem for web sites, and in such a way that non-malicious users will be less free to use the web the way they want.
And if Google decides later that AdSense from untrusted browsers doesn't count? YouTube and Gmail refuse to run at all on untrusted browsers? Is it still optional, realistically?
WEI is really about denying the user full control of their own device. If you give people full control of their devices, you will have bots. Do you believe eliminating bots is more important than general purpose computing?
A bot is just some computer doing what its owner wants. OP is happy because WEI will eliminate bots. OP is inconvenienced by other people using computers in ways they don't like, and wants to take control of the computer away.
As strong AI is knocking on the door, we see people wanting to take general purpose computing away. All the worst outcomes involve people losing the ability to control their own computers.
> WEI is really about denying the user full control of their own device. If you give people full control of their devices, you will have bots. Do you believe eliminating bots is more important than general purpose computing?
Worse than that -- unless you disallow any sort of scripting and accessibility hooks, WEI doesn't prevent malicious requests. It just forces you to script your system via autohotkey or its equivalent.
People used browser APIs and some other people thought to take that away. When some people use autohotkey, what will the other people think about doing?
Why not? How do you want to solve the problem of provenance? If you feel it's not a problem to begin with, then the sites in question can simply choose not to enable it. If they enable it and believe it is a problem, then clearly there's a dissonance between the places you choose to visit and their goals, no?
> sites in question can simply choose not to enable it.
My problem isn't that I as a developer don't have an option to not implement attestation checks on my own web properties. I already know that (and definitely won't be implementing them).
My problem is that a huge number of websites will, ostensibly as an easier way to prevent malicious automation, spam etc, but in doing so will throw the baby out with the bathwater: That users will no longer have OS and browser choice because the web shackles them to approved, signed, and sealed hardware/software combinations primarily controlled by big tech.
WEI does not solve any "problem of provenance"; it's DRM for the web. It asserts things about the browser environment to the website operator, not the other way around.
Are you sure you actually understand these two technologies (WEI and TLS) sufficiently to make these claims?
The problem of provenance is significantly smaller than the problem of monopolistic companies given control over who is and is not an approved user of the web.
Provenance to the extent it is a problem is already handleable and largely handled. Note that "handled" here does not mean it is 100% gone, only that it is contained. Monopolistic control over the web is not containable.
Under capitalism (or really any socio-economic system) we engage with services for reasons other than choice all the time. For example, if you're living in an area where just one or two banks exist, and both of them suddenly decide to force DRM because their cyber insurance company told them to, you can suddenly no longer access their sites on Linux. That's pretty fucked up.
The people who want to use DRM to solve their problems should just suck it up and find alternatives.
> then the sites in question can simply choose not to enable it
Google can reduce the page rank of websites that don't enable it (or just not rank them at all), and now everyone who wants to be found has to enable it
That would clearly be an antitrust violation or deceptive business practice in one or more countries. Though by the time they get penalized for it, the damage would have been done.
Google can already do this if they want to. For example, they could increase the page rank of sites that use Google Analytics (or any other Google client library). But this would be exceedingly stupid because it would compromise the quality of their search results, and remaining the leader in search should be their highest priority.
I am. I've had apps try to use Google Safetynet to prevent me from running them on my phone (which is not running the manufacturer-provided Android build), and I am certainly opposed to that.
I wouldn't mind being able to use the TPM to tell me whether the hardware and software are what I expected them to be, but that's different.
What do you get from blasting this thread with a bunch of naive one liners that you could answer yourself if you studied the topic on your own for a little bit?
The answer to this one is that the fundamental problem that current TPMs aim to "solve" is that of allowing corporate control and inspection of end users' computers. To continue having a free society where individuals have some autonomy over the devices they purportedly own, this needs to be soundly rejected.
Good idea, we just throw out all the security mechanisms to avoid "corporate control" and, even worse, antivirus software "inspecting end users' computers".
I'm sure people will be very happy about all the mal- and ransomware they receive. Imagine the utopia we would live in.
You're using scare quotes, but I do specifically mean corporate control. Current TPMs were designed around giving centralized parties (eg corporations) privileged keys. TPMs could certainly be designed to not have any baked in privileged keys, instead putting the owner at the trust root. The current crop just wasn't.
Also, the fact that you're talking about antivirus shows that you're not really in touch with the full gamut of computing. From my perspective, antivirus was something that was relevant two decades ago.
Why are you proposing some sort of reverse slippery slope? So because "we" don't oppose a TPM, we shouldn't oppose any form of attestation?
If anything you are just proving the point of the most paranoid.
I don't even have a strong opinion on this, but it's so weird to see this argument over and over. It's just calling for an even more extreme reaction to any effort that goes in this direction, just in case it's used to justify a push for even worse stuff down the line.
Yes, TPMs have no business being part of the open web. They enable CIOs to make bad decisions like preventing a bank's website from being loaded in non-TPM browsers.
Fundamentally both give a 3rd party the authority to verify the legitimacy of something, and similarly both can be avoided if you're willing to not participate.
One provides encryption over the wire (TLS), but in modern implementations (extended validation certs are more or less dead in the browser space) hardly provides the user any guarantee that the website is who they think it is.
The other provides the website the ability to ensure that the user's device is one of an approved set of devices, with an approved set of operating system builds, with an approved set of browsers.
These are fundamentally different, surely you can see that.
> similarly both can be avoided if you're willing to not participate.
Actually, no. Unless your definition of "avoided" is simply not using a website which requires attestation, which, over time, could become most of them
I think you're mistaken about what TLS does. It doesn't give a third party any authority to verify anything. It provides integrity and confidentiality to both parties to an HTTP exchange, nothing more.
A TLS client does not contain any trusted private key. You can write one yourself by reading the RFCs. The same is not true for WEI.
TLS used to also guarantee that you were talking to the correct entity; that's what EV certificates are for. So there was a verification step that ensured you were indeed the business/organization you claimed to be.
EV certs still exist, but browsers don't really differentiate between DV and EV certs anymore.
Ah, yes, in that sense I can see the parallel (in that being reachable in modern browsers is contingent on being able to obtain a TLS certificate). I remember similar concerns being raised about browsers discouraging HTTP.
But TLS certificates solve a much narrower problem than WEI ("are you communicating with the site you think you are") and are widely and cheaply available from multiple organizationally independent certificate authorities.
In particular, TLS certificates don't try to make an assertion about the website visited, i.e. "this site is operated by honest people, not scammers". WEI does, with the assertion being something like "this browser will not allow injecting scripts or blocking elements".
TLS doesn't verify that particular software or hardware is on the other side; one could design a custom CPU on an FPGA, write their own TLS stack for it, and be able to connect to any TLS-using site as usual without needing to get those things approved.
Even taking your (really flawed) comparison, there's a huge difference. With TLS, the servers (the ones being attested) can trivially avoid TLS if they want to - web browsers still support http, after all.
In WEI, the users (the ones being attested) _cannot_ avoid WEI. If a website decides to not allow an unattested user, they can simply decide to refuse access.
Anyone using a browser without this feature will end up becoming second class citizens who must jump through (extreme) hoops to use the web...
Or they're just walled off from most of the web entirely.
I use a variety of personally developed web scraper scripts. For instance, I have digital copies of every paystub. These will almost all become worthless. My retirement plan at a previous employer would not let me download monthly statements unless I did it manually... it was able to detect the Mechanize library, and responded with some creepy-assed warning against robots.
No one would go to the trouble to do that manually every month, and no one was allowed robots apparently. But at least they needed to install some specialty software somewhere to disallow it. This shit will just make it even easier for the assholes.
I also worry about tools I sometimes use for things like Selenium.
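For what it's worth, the kind of personal scraper being described here is usually just a handful of lines. A minimal sketch using the Mechanize library mentioned above; the URL, form fields, and link text are all made up for illustration:

    import mechanize

    br = mechanize.Browser()
    br.set_handle_robots(False)  # personal-use script for my own account data
    br.addheaders = [("User-Agent", "Mozilla/5.0")]

    # Hypothetical login page and field names.
    br.open("https://payroll.example.com/login")
    br.select_form(nr=0)
    br["username"] = "me@example.com"
    br["password"] = "hunter2"
    br.submit()

    # Grab the latest statement; the link text is invented.
    br.follow_link(text="Latest paystub (PDF)")
    with open("paystub.pdf", "wb") as f:
        f.write(br.response().read())

It's exactly this kind of throwaway automation over your own data that attestation would break.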
To be fair, it's only a matter of time until CV and NNs replace Webdriver/Selenium as the go-to for scraping. First using accessibility APIs, and later on imagine something you plug into USB-C that emulates DisplayPort and HID devices.
> To be fair, it's only a matter of time until CV and NNs replace Webdriver/Selenium as the go-to for scraping. First using accessibility APIs, and later on imagine something you plug into USB-C that emulates DisplayPort and HID devices.
*exactly*. The analog loophole is where this cat/mouse game must end. Since we already know how it'll play out, can't we invest our time into more useful endeavors?
This is not true. Sites will not be obligated to implement WEI. At the end of the day bots are a real issue, with no real solution other than attestation. AI is accelerating this issue. This (WEI or something else) is inevitable.
> This is not true. Sites will not be obligated to implement WEI.
There are a number of sites I frequent but don't log in to or register for an account.
Every single one of them has an absurd number of captchas, or I see the Cloudflare protection thing come up first for 3 seconds.
So while hypothetically it may be true that they don't have to do it, they will. It's not even clear to me that Firefox could implement it too... so do I have to switch back to Chrome (or [barf] Safari?)? Dunno. I can't predict the future, but you'd have to be in some sort of denial to not see where this is going.
> At the end of the day bots are a real issue
Bots are fucking awesome. We should all have bots, out there doing the boring stuff, bringing back the goodies to us. If someone tells you that bots are bad, they're lying to you because they're afraid that you might find out how much you'd want one.
Maybe so, but if so, let's please make it something else.
I'm fine with attestation when it comes to high-risk tasks such as confirming financial transactions or signing legal documents, or anonymous "proof-of-humanity" solutions such as Apple's Private Access Tokens (as long as there's a CAPTCHA-based or similar alternative!) for free trials or account creations (beats using SMS/phone number authentication, at least), but applying Trusted Computing to the entire browser just goes much too far.
With the rate AI is accelerating, it's possible that nothing akin to a CAPTCHA may be viable soon. That sort of verification is already approaching the threshold of what's reasonable to ask humans to solve.
SSL is the client verifying the server, and the client can thus opt to skip or alter that in any way it sees fit. WEI is the reverse: the server validating the client, so the client has no way to opt out.
TLS does not facilitate preventing you as a web site visitor from inspecting or modifying the web content served over it, e.g. by blocking ads or auto-playing videos. WEI does.
WEI doesn't prevent bots. Bots would just need to script an attested browser via tools like AutoHotKey -- the only way WEI would prevent bots would be by preventing you from running the browser on an operating system without 3rd party software installed. WEI is a 2 or 3 month roadbump for bot builders.
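To make that concrete, here's a rough sketch of what "scripting your system via AutoHotKey or its equivalent" can look like, using pyautogui as the Python stand-in; the coordinates, URL, and text are obviously hypothetical, and a fully attested browser has no way to tell this apart from a human at the keyboard:

    import time
    import pyautogui

    # Assumes an attested browser window is already open and focused.
    pyautogui.hotkey("ctrl", "l")             # focus the address bar
    pyautogui.write("https://example.com/post", interval=0.07)
    pyautogui.press("enter")
    time.sleep(3)                             # wait for the page to load

    # Click a hypothetical comment box and type into it.
    pyautogui.moveTo(640, 480, duration=0.8)  # coordinates are made up
    pyautogui.click()
    pyautogui.write("Totally organic human opinion", interval=0.05)
    pyautogui.press("enter")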
> the only way WEI would prevent bots would be by preventing you from running the browser on an operating system without 3rd party software installed
Or by isolating the browser from third party software. Android does not let applications mess with each other. Windows already prevents non-elevated applications from touching elevated applications (i.e. running as administrator).
What makes you think that Windows won't add an "untouchable" mode to executables belonging to "approved" browsers? The kernel is already locked down so you won't be able to bypass it that easily.
It's absolutely possible to automate an "attested" phone with a physical robot finger, which is neither extremely complicated nor expensive tech. It might be too expensive for mass-scale DDoS, but it would still be cost-effective for more interesting things like click fraud.
How would this prevent bots? It's very easy to set up a bot that's running Chrome on Android, or whatever environment is required. Bots can do whatever you tell them without complaining. This only prevents actual humans who want to use a non-mainstream browser, or use add-ons to help them browse, or use a non-mainstream operating system or device.
WEI is like requiring people to get their brain scanned before you let them visit your house. "Sorry, I require a valid attestation from Google that you are a real human," you say. Your friend now needs to drive down to the local Google® Privacy Invasion Center™ and have all of their personal secrets exposed so Google can prove they are, in fact, not a robot. Except, oh no, Google found Linux in their brain scan! The horror! How dare they value their own freedom! Anyone who opposes spying from Chrome and/or Google Play Services is obviously a bot. Nothing to hide, nothing to fear, right? Your visitor, who is clearly not a bot, fails to obtain a valid attestation from Google. You deny them entry to your house.
This all seems to me that in a decade we'll be having the same discussion, with the same excuse, but eventually the proposal from big corporations will be to require plugging-in a government-issued ID card into a smartcard reader in order to access pre-approved websites with pre-approved client portals running in pre-approved machines.
I think your comparison to SSL is actually important, because encryption is a discrete problem with a discrete solution. But this WEI proposal is designed to detect botting, which is a cat and mouse problem without a clear end game.
Exactly. If people want to create bots, at the end of the day we'll end up with VMs running AutoHotkey and Chrome, or physical machines with fake mice and keyboards, or actual computer setups with robot arms moving the mouse around; there's no stopping bots.
Well, not if you ultimately tie something like WEI to hardware attestation. Then fraudsters would have to buy additional devices, which is not a complete deterrent [1], but would change the economics significantly.
But many here are (in my view rightly) arguing that this would be too high a price to pay for bot/spam protection, since it would almost inevitably cement the browser, OS, and device monoculture even further.
There's a lot of moral outrage regarding this proposal, rightfully so. In fact, it should be further intensified. But apart from that, I don't think this proposal will work in any case.
When implemented without holdouts (closed loop), you do have a tight DRM web, which will attract legislators. Or so we hope.
When implemented with holdouts, it's barely useful to websites since they still need the backup mechanisms to detect fraud that they have anyway. If they need to keep it around, might as well use that as singular solution which has the added "benefit" of collecting way more personal data.
I wonder if this will prod the Ladybird development team to make binaries available for non-savvy end users. Having an additional open-source browser would help.
Note that this doesn't even prevent people from using tools like AutoHotKey or their moral equivalents to make malicious requests from browsers.
It only makes it impossible for legitimate users to run their own code -- people who want to run OpenBSD, or fork Chrome to make sure that ManifestV3 doesn't permanently hobble adblockers, or maintain their own alternative browser UI.
Third-party attestation is a show stopper for openness. I'm not a fan, and this does not solve any problems I face with the software I make, or that my users have accessing it.
It's insecure because someone on-path (or, with more effort, off-path) could replace the contents of your website with whatever they want, including taking payments "on your behalf" and then just pocketing them. The main original point of HTTPS, and why I assume it does not use STARTTLS or similar, is so people in the late 1990s and early 2000s could figure out which websites they were allowed to put their credit card numbers into.
> Why does my http site, which has nothing important on it at all, get flagged by chrome as "insecure"?
Because an attacker can inject JavaScript code on it, and use it to attack other sites. The most famous example of that is "Great Cannon", which used a MITM attack on http sites to inject JavaScript code which did a distributed denial of service attack on GitHub. Other possibilities include injecting code which uses a browser vulnerability to install malware on the computer of whoever accesses your site (a "watering hole" attack), without having to invade your site first.
The usual argument is that vanilla HTTP makes it possible for a man-in-the-middle (your ISP, presumably?) to tamper with data payloads before they're delivered.
Requiring HTTPS means you require clients to have up-to-date TLS certificates and implementations. This provides a ratchet that slowly makes it harder and harder to use old computers and old software to access the web. Forced obsolescence and churn is highly desirable for anybody who controls the new standards, including Google.
You can run TLS stacks that work with modern websites on old devices, it's just not really that secure, see https://www.dialup.net/wingpt/tls.html for running "Modern TLS/SSL on 16-bit Windows"
I agree that extending trusted platform trust all the way up into web APIs is gross — it would be fine if the TPA club was wide open to anyone building their own OS, but that clearly will never happen and only the corporate-aligned cabal will ever be trusted, and all the free/open OSs will never be allowed to join.
But… is there scope for the attestor in WEI to be a third party site that does a super fancy “click on all the stop lights / stairs / boats” captcha, and then repurposes that captcha result for every other site? That doesn’t sound like an awful service to add to the web. It would mean each individual site no longer had to do their own captcha.
(Probably impossible without third party cookies. But then that kind of implies that if WEI does make it possible then it could be shown to provide a tracking service equivalent to third party cookies? Again, gross.)
I agree, I think a third-party attestation service makes a lot of sense. Similar to how HTTPS has trusted CAs, there could be different trusted attestors that can verify that a user has some account with some kind of verification, and these pluggable attestors could then be trusted by sites. You'd still need to integrate with a trusted attestor, which some people might find objectionable, but it's probably better than the current proposal in that regard.
This of course only covers half of the use cases discussed (the half about preventing bots, not to say anything about the more DRM-ey aspects).
What's the potential for this to enable mandatory remote attestation that your personal machine is running For-Your-Own-Good™ spying software in order to use any significant services (banking, etc)?
Are you aware of any websites that have tried to implement payments, but failed or chose not to because they couldn't verify which users have paid? It's an incredibly easy problem to solve without WEI.
I genuinely am not understanding why you think WEI would make this easier. You have one central place that you log in to set up this payment system, I guess Google in this case, and then other sites check with the central authority to see if you're a paying user. They can use this attestation or a cookie or a Log In With Google button, what's the difference? Either way when you browse from a new device you'll have to log into the payment system again.
It provides a uniform service for ensuring a client has desired properties.
That’s kinda tricky to do well. Traffic for monitoring, you can do with a jwt, but like, enabling chunked transfer in python request lib is a problem you discover. An array of attestors could guarantee feature sets.
There's nothing about payments that requires testing client properties though. What you want is the ability to test if there's a corresponding payment, that has nothing really to do with the client's device. It just seems like irrelevant information, what are these "desired properties"?
You want a corresponding token with the request that matches a payment. And WEI seems like a strictly inferior way to get that instead of just... asking a payment provider for the token. What does my hardware/OS/browser have to do with a payment token?
I'm not understanding how a token the client puts in a request header - one that proves you've paid, or is just an account-lookup token used to ask a payment processor whether the account is in good standing - is limited in a way that WEI improves on. I don't see any use cases that wouldn't work that way but would now work with WEI.
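A sketch of the alternative being described: the server just checks a payment token against the payment provider, with no interest in the client's hardware or OS. The header name, provider endpoint, and response fields are all invented for illustration:

    import requests

    # Hypothetical payment-provider endpoint for token lookups.
    PAYMENT_API = "https://payments.example.com/v1/tokens/"

    def request_is_paid(headers):
        """Return True if the request carries a token for an account in good standing."""
        token = headers.get("X-Payment-Token")  # invented header name
        if not token:
            return False
        resp = requests.get(PAYMENT_API + token, timeout=5)
        return resp.ok and resp.json().get("status") == "active"

Nothing in that check cares whether the client is Chrome on "approved" hardware or curl on OpenBSD.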
>It provides a uniform service for ensuring a client has desired properties.
I see that as a downside, not a benefit -- who decides whether or not a client (i.e., my software running on my hardware) has those "desired properties" and what might those properties be?
It will, but curl and others will likely simply be upgraded with a puppeteer of sorts that plugs into your Chrome runtime. So this will have prevented nothing (except forcing non-technical users to adopt Chrome and thus killing off new browser entrants, offering the chance to force-feed even more Google ads).
The attestation API will allow websites to verify certain things about the user agent which they then may use to either deny access or alter the access for the requested resource. This is similar to existing methods of checking the "User-Agent" header string but is much more robust to tampering because it can rely on a full-chain of trust from the owning website.
So will existing tools work with this?
Websites that do not require attestation should work fine. This will probably be the vast majority of websites.
Websites that require attestation may or may not work depending on the results of the attestation. Since programs like curl do not currently provide a mechanism to perform attestation, they will indicate a failure. If the website is configured to disallow failed attestation attempts, then tools like curl will no longer be able to access the same resources that user agents that pass attestation can.
My opinion is that it is likely that attestation will be used for any website where there is a large media presence (copyright/drm), large data presence (resource utilization/streams), high security, or any large company that is willing to completely segment its web resources into attested and non-attested versions. Tools like curl will no longer work with these sites until either a suitable attestation system is added to them, or the company changes its attestation policy.
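A hedged sketch of the kind of server-side gate described above. The proposal doesn't pin down how the signed verdict reaches the server, so assume here it arrives as a JWT-style token signed by the attester; the key, algorithm, and field name are placeholders, not anything WEI actually specifies:

    import jwt  # PyJWT; the JWT format is an assumption for illustration only

    ATTESTER_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----..."  # placeholder for the attester's published key

    def allow_request(verdict_token):
        """Hypothetical gate: serve only clients that present a valid attestation verdict."""
        if verdict_token is None:
            # curl, wget, and any browser without an attester land here and get refused.
            return False
        try:
            verdict = jwt.decode(verdict_token, ATTESTER_PUBLIC_KEY, algorithms=["ES256"])
        except jwt.InvalidTokenError:
            return False
        # "meets_integrity" is an invented field name used purely for illustration.
        return bool(verdict.get("meets_integrity"))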
The whole point of WEI is that the site can choose to block any combination of browser and OS they see fit, in a reliable way (currently, browsers can freely lie). CURL and friends will almost immediately be branded as bots and banned - that's the stated objective.
It is more severe than that. The design favors a whitelist approach: Only browsers that can get the attestation from a "trusted source" are allowed. Browsers that cannot, don't.
How about adding a fair rule to the standard, that an attester cannot attest their own products? I wonder how long it would take for Microsoft or Apple to attest google.com as a trustworthy website?
Well, it's making me finally kick my Chrome habit. My work machine runs Firefox and it's fine, but my personal stuff is all on Chrome because it's also my password management, etc. etc.
I tried once before, when I quit working at Google and was trying to de-Google a bunch, and I never succeeded.
I plan to move everything over over the next few days. Wish me luck!
The Internet in general, programmers especially, and the Web community especially especially owe Google a massive debt of gratitude for all they’ve done over the years.
But this one’s simple: “literally go fuck yourself with this. we will fight you tooth and fucking nail every fucking angstrom on this one. it’s a bridge too far.”.
I wanted to write some proper feedback on the GitHub repo, but they've closed issues and PRs. Until they open it back up again, here are my thoughts on the spec:
- Mozilla is already publicly and officially opposed (https://github.com/mozilla/standards-positions/issues/852#is...), on principle ("Any browser, server, or publisher that implements common standards is automatically part of the Web") as well as on technical concerns around the safeguards and downsides of the proposal.
"We have Private Access Tokens (aka Privacy Pass) for some of the claimed use cases of this spec. We think it's a more privacy-respecting solution. The Explainer isn't very clear on why specifically Web Environment Integrity is better. It mentions a feedback mechanism, but not the specific mechanism. It also exposes more info to the page. The Explainer claims this spec is necessary because Privacy Access Tokens don't support feedback from websites on false positives / false negatives, however, neither the spec nor the explainer include a feedback mechanism. Without more specifics, we would not be enthusiastic about duplicating an existing standards-track solution for the same use cases."
- Vivaldi is clearly opposed, per this blog post.
- Holdback as a mechanism is a weak defense against abuse. Some potential stakeholders are already suggesting to scrap holdback to support their use-cases (https://github.com/RupertBenWiser/Web-Environment-Integrity/...), leading to the possibility that it may not even be part of the final standard. Holdback is not technically enforced: a user agent can choose not to hold back, and if they are sufficiently popular they may induce web site operators to rely on their signal (at least for that browser) which would have the exact "DRM" effect that the proposal claims to avoid. The exact implementation of holdback matters a lot: if it's e.g. per-request, a site can simply ask repeatedly; if it's per-session or per-user, a malicious agent can pretend to be heldback the entire time.
- Since holdback is being touted as essentially the only defense against "DRMing" the web, it's a real mistake to have it be so poorly specified. The way it's currently specified makes it sound more like an afterthought than a serious attempt to mitigate harm.
- Compared to Private Access Tokens, WEI leaks far more information. WEI allows attesters to provide arbitrary metadata in their (signed) attestation verdict, whereas PAT tokens are fully opaque and blindly signed. Furthermore, PAT tokens can be in principle obtained through alternate attestation mechanisms (e.g. captcha, authentication, ...) without leaking the details of how that attestation is performed. WEI does not provide for this, and instead is designed around explicitly validating the "web environment".
Corporations (Apple / Google / Microsoft / Nintendo? / Sony). They all want a rental model along with a console model. iOS is already just this: a personal device on which you rent software as a service and are restricted from modifying anything.
The consolification of personal computing has been moving this way for some time. It's essentially late-stage capitalism gatekeeping.
As a child of the 80s, it's hard to watch things keep moving in this direction :/
It still boggles my mind that Apple's Safari, which is the only choice on iOS - the dominant mobile OS in the US - already implemented and shipped a very similar feature, but the reaction to Google's proposal is 10x worse. I have not seen a single #BoycottApple post here in this thread, but more importantly, the sky did not fall after Apple introduced this.
A concern is that websites vital to people's lives, such as banks and government services, will adopt this to mimic the control they have on mobile platforms. With few brick-and-mortar branches remaining, it leaves few options open.
I'm curious to hear from someone familiar with web development:
How much do websites invest in accessibility and related features that cater to a small audience?
Can we draw any conclusions from this to how websites will deal with accessibility to non attested users?
As noted in the article, Google comes up with a scheme like this every couple months. They also can’t seem to identify good sites anymore, based on their search results.
So… fuck it. Let them DRM their part of the internet. It is mostly shit nowadays anyway. They can index Reddit, X, and a bunch of sites that are GPT SEO trash.
We're never getting the 201X internet back anyway, so let Google and friends do their thing, and everybody who doesn't want anything to do with it can go back to the 200X internet. It was kind of disorganized, but it beats fighting them on DRM over and over again.
But can you get a token and then not send it, saving it for later? That's more what I was thinking. Not replay attacks, but gathering a bunch of tokens that are valid but never submitted to the origin, and then providing them via API requests to those that need one to use unauthorized devices with that origin.
If we had known how fleeting the glory of the early 2010s internet would be, with everything ad-free and seo still comparatively rudimentary, would that have made it easier or harder to watch it die?
And everything was simpler, you could throw something up on a $10/month shared host. Now you need a full stack of services running in the cloud charged by the minute.
What's unclear to me is how the actual verification by this attester would happen. Somehow the attester, which is also a remote service, verifies your device? Are there any details on how that would happen specifically?
Basically, you build up a set of cryptographically verified computing primitives (like a secure enclave) that are enforced by a hardware component with keys baked in by the manufacturer. It's setting up an "owned by vendor" computing channel and baking it into the silicon.
You won't get the chance to refuse this feature. There'll be too much money at stake for manufacturers to not retool for it. It'll be the only thing they make to sell, so take it or leave it chump.
to call the write-up underwhelming is to be the most generous one can be. the minimum requirement that qualifies one to add 'unpacking' to the title wasn't met. this all reads as a poorly argued opinion of something google is apparently trying to force down our throats. the specification isn't discussed (they're generous enough to point you to it though), a cursory mention of the supposed pros is made, but an even lazier attempt is made at describing the cons. really disappointing read!
read something recently that makes me think google is doing this to develop tools that allow browsers to detect replayed tokens on platforms like macOS and iOS.
The only way to oppose this is via regulators and antitrust legislation. You will not beat the Googlers in the marketplace or with some clever technical argument.
It's the insane power that companies like Google, Microsoft, and Apple hold over the tech world. It's like they can just dictate everything to suit their own interests, and it's the users who end up losing out.
Remember when Apple killed Flash? I heard it was because they wanted people to use their app store more instead of us playing games in the browser, so they could make more money. And Microsoft installing IE and setting it as the default browser? And now, Google is making changes to how we browse the web and adding things like Manifest v3, to boost their ad business.
The most irritating part is that it always gets packaged as being for our safety. The sad thing is I've often seen people even drink this user-safety kool-aid, especially with Apple (like restricting browser choices on mobile - not sure if that's changed now).
I really think there should be some laws in place to prevent this kind of behavior. It's not fair to us, the users, and we can't just rely on the EU to do it all the time.
> Remember when Apple killed Flash? I heard it was because they wanted people to use their app store more instead of us playing games in the browser, so they could make more money.
Even without the incentive of “moar profit$” they never entertained Flash because fundamentally, it sucked. When it landed in Android, it was a bloated mess that sucked the battery dry and was slow as molasses. On every platform it existed on, it was a usability and security nightmare. No, Apple “killed” Flash by making a sane decision not to allow it in their fledgling platform because Flash outright sucked, informed largely by the abhorrent performance on all platforms.
> And Microsoft installing IE and setting it as the default browser?
SMH. There was never an issue with Microsoft providing IE as a default initially - that came later with the EU. The biggest issue was that if an OEM (a Dell or an HP) struck a deal with Netscape to provide that as the default, Microsoft threatened to revoke the OEM's license to distribute Windows. In the late '90s and early '00s that would have been the death knell of an OEM. And that is the antitrust part. They abused their position as the number 1 desktop OS (by a significant margin) to take control of the then-nascent browser market.
Fundamentally, the iPhone sucked. It came out without 3G when every other internet-enabled phone had it, and constantly dropped calls in the US for the first two years until AT&T upgraded their networks. Android phones could run Flash just fine; it was a selling point for them, at least until the Google app store had enough content.
Android phones could not run Flash "just fine". There was no version of Flash released for any mobile device that was what anyone would call "good".
I was writing Flash-based apps/sites at the time and there wasn't a single device we had in our QA set that we thought was "acceptable" in its performance. It was buggy. It'd crash out of nowhere. It'd consume so much memory that user's apps were force quit left and right. It would kill a battery with a quickness such that we had one customer who had to carry multiple spare batteries just to use the app we wrote for their internal team.
I had an Asus tablet with the Nvidia Tegra 3 chip and 1GB RAM in 2011, and it ran Flash sites fine, though obviously not at desktop-level quality. Flash games on Newgrounds were hit or miss, but mostly because they weren't touch-friendly.
You probably recall that mobile internet in general was far from fluid in those days; Browsers couldn't handle multiple tabs well, and iOS would show an annoying mosaic if you scrolled web pages too fast (before the browser could render the page). I would rather have the option of having something imperfect available, than have the OS vendor lock them out entirely.
Yes. Every SECOPS person let out a collective sigh of relief when the weekly P0 patches for Flash stopped coming. Apple may have been trying to push towards 'native' apps, but that was almost certainly secondary; Safari was leading the way on HTML5 APIs.
Let's not pretend that the death of Flash was a tragedy.
At the time (2013ish), I was working with a company that used to make a lot of very cool stuff in Flash; we were already starting most new projects in HTML5, and (coincidentally) the company was also growing like crazy (also in terms of new hires).
With that, at one point we actually started running low on physical space in the office. We had a running joke (started by a Flash dev, of course) that we'd just move all of the remaining Flash guys to the toilet...
But in all honesty, Flash was a terrible, absolutely horrible technology. I was lucky enough that I've only had to work with it from the backend, but I still remember the dread.
I think Adobe missed a huge opportunity where they could have built new tooling and a framework to target HTML5.
> Remember when Apple killed Flash? I heard it was because they wanted people to use their app store more instead of us playing games in the browser, so they could make more money
The original iPhone, which killed Flash, didn't even ship with the App Store. They assumed we'd only be using web apps.
It’s in the original Steve Jobs presentation when he announced the iPhone.
Would it be possible for someone using a zero-day vulnerability to develop a botnet that infects enough computers on the web, with a payload that modifies browsers in a way that renders them untrusted to WEI, effectively locking anybody infected out of the web? Would it be a new way to DDoS users off the "trusted" web?
Can someone send attestation requests from a range of residential IPs with such frequency that the attestation sequence is forced to captcha users, thus defeating it? You don't need the token response back from an attestation, so you could spoof your IP and not worry about getting a response.
There is zero point debating this in technical detail because the proposal itself is evil. Don't get distracted by tone policing and how they scream you must be civil and whatnot.
Our best hope is kicking up a huge fuss so legislators and the media will notice and Google will be under pressure. It won't make them cancel the feature, but don't forget that they aren't above antitrust law. There is a significant chance that some competition authority will step in if the issue doesn't die down. Our job is to make sure it won't be forgotten quickly.
Yes, we need to protest. And I don't mean protest by slamming Google's github repositories with comments. That's not a protest. Go tell the media. Go tell your elected officials.
I also think web developers getting together like we did with SOPA/PIPA and raising awareness on our web properties can also help. How do we organize that?
Blocking Chromium altogether isn't as big of a deal as it seems, either (unless you're a truly huge website). It's so easy to switch to Firefox these days. Probably takes a few minutes. For technical blogs with useful content on them I suspect people's desire to see the content will override the inertia of switching browsers.
> Does not blocking Chromium devolve in behavior to a comparable level as WEI? Seems like the same problem is introduced: breaking the web.
Not really, for two reasons.
First, it can be bypassed, for instance with an extension which hides the relevant JS property and/or switches the user agent, or even edits the site's JavaScript on the fly. The whole point of WEI is that it cannot be bypassed.
Second, just blocking Chromium does not prevent the development and use of new web browsers and/or operating systems, while a predictable consequence of WEI is making them non-viable in practice (they'd first have to convince Google that both the browser and the operating system are DRM-ed enough that the user does not have enough control to make the browser do everything the user wants, and only then would the browser be allowed to access WEI-walled content).
I wrote to some senators today about this and also filed an FTC complaint against Google.
Talked to a few friends inside Google as well and they are also against it.
Firefox is going to be my default moving forward.
There is no reason or way to discuss it with technical merits anyway. Nobody can create a new issue on that repo, nor can they create a PR. Comments on reviews are also disabled.
Many of us are in technical positions where we can do this. We need to bring back the "Works best with Mozilla Firefox" pop-overs.
Basically my arguments were that it's anti-competitive, against the open web, and a risk to national security agencies. The latter, while a valid argument, is mostly there to hopefully rattle politicians and government agencies.
I can see it being useful to have a feature which could validate whether another user on a website is a human, e.g. on Reddit or Twitter, the user you're talking to gets a little checkmark (not the blue checkmark) next to their name if they've been WEI-validated. Rather than refusing to let a user use the platform, just let other users know that the person they're talking to isn't a bot.
WEI doesn't check whether they are a bot though.... they can trivially use a "trusted" browser setup and just automate it with Selenium or whatever. Or in a worst-case scenario, a $5 robot arm, with a perfectly attested browser.
The whole "this will block bots" part of the spec is complete bollocks and a red herring to distract from the real purpose - to block adblockers and competition from new browsers. And DRM, of course.
I guess it depends how far the certification goes.
If even extensions can be detected, why wouldn't selenium be detected?
Granted, I don't know how it works exactly.
In addition to the $5 robot arm you need to add $200 for the device it is operating.
Drastically raising the cost to run a bot farm is key. You can't fully eliminate inauthentic behavior, but you can make a lot of it unprofitable.
You don't have to use selenium. You can use any software that can read video memory and act as a mouse and keyboard. It doesn't have to be an extension. The browser isn't directly involved, so vetting the browser or hardware does no good.
If McDonald's required 12 year-olds to use an ordering app because their banknotes might be stolen, would that be a reasonable compromise? Foreclosing the possibility of children not being tracked (which is illegal, btw) in exchange for some marginal benefit for big tech?
> It won't make them cancel the feature, but don't forget that they aren't above antitrust law.
They can buy governments many times over with their vast resources, so it may already be too late for that. What ideally should happen is that corporations this big get split up until each of the new entities meets the definition of an SME. That's what is broken in the current iteration of capitalism: there is no real competition any more, so it no longer works.
This abuse of tech potentially goes beyond antitrust; it damages global economic wellbeing and impoverishes information systems on a global scale, generating isolation, ignorance, division, and radicalization.
How to Email the President and Members of Congress
The online form is the fastest way to send a message, but if you prefer to write or type a letter, keep the following in mind:
Use 8 1/2 by 11-inch paper
Either type your message or handwrite it as neatly as possible
Include your return address on both the letter and the envelope
Mail the letter to The White House, 1600 Pennsylvania Avenue NW, Washington, DC 20500
Include the appropriate postage (stamp)
Contact the White House By Phone
Even though you can’t email the President, you can call the White House. However, to be clear, you will likely only speak with a staff member. To call, use the following phone numbers:
For general comments, call 202-456-1111
To reach the switchboard, call 202-456-1414
For TTY/TTD, use Comments: 202-456-6213 or the Visitor’s Office: 202-456-2121
It is highly unlikely that you will get to speak with any sitting POTUS directly on the phone.
How to Send an E-mail to Your House Representative
To find your representative, search the House of Representatives database by zip code. As an alternative, visit the Representative’s personal website. Most government websites have email and mailing addresses listed on the Contacts page.
Many websites also offer a contact form, but we recommend using this only as a last resort. Many online contact forms go to the website maintenance team and often don’t reach the representative or their staff. If you want a response, send a direct email or a letter.
How to Send an E-mail to Your Senator
To find your state Senator(s), select your Senator from the state-by-state list on the United States Senate’s Web site. Note the list is in alphabetical order and provides the following information for each senator:
Senator’s full name
Political party affiliation and state they represent
Mailing address
Phone number
Link to an email contact form, usually on the Senator’s website.
Also, you can call the United States Capitol switchboard at (202) 224-3121. A switchboard operator will connect you directly with the state Senator’s office you request.
Why use quotes for "dangerous" when the first sentence is literally: "Why Vivaldi browser thinks Google’s new proposal, the Web-Environment-Integrity spec, is a major threat to the open web and should be pushed back."
>It will actually be very positive for the web overall and you'll see the benefits soon enough.
What might those benefits be? Not being snarky here, but AFAICT the only folks who gain any benefit seem to be Google and their customers (advertisers).
Very controversial take but I think this benefits the vast majority of users by allowing them to bypass captchas. I’m assuming that people would use this API to avoid showing real users captchas, not completely prevent them from browsing the web.
Unfortunately, people who have rooted phones or use nonstandard browsers are not more than 1% of users. It's important that they exist, but the web is a massive platform. We can not let a tyranny of 1% of users steer the ship. The vast majority of users would benefit from this, if it really works.
However i could see that this tool would be abused by certain websites and prevent users from logging in if on a non standard browser, especially banks. Unfortunate but overall beneficial to the masses.
Edit: Apparently 5% of the time it intentionally omits the result so it can’t be used to block clients. Very reasonable solution.
Captchas are intended to stop bots. WEI is intended to vet that the hardware and browser has been validated. That doesn't impact bots, because you can implement bots on top of a valid hardware and browser so it will pass the WEI check.
I remember the discussions on Slashdot many years ago about the "analog hole"; you can have all the DRM you want, but people can still point a camera at the screen and record a non-encumbered copy that way. This is definitely the case with automating web activities; you take a trusted computer, point a camera at it, and have your bot synthesize keypresses and mouse movements. There is absolutely no way for a website at the other end of the Internet to know that a human is using the computer. (I see this as the "end game" for FPS cheating. I don't think anyone is doing it yet, but it's bound to happen.)
I'm guessing the reason we want attestation is so that Chrome can drop ad blockers and websites can drop non-Chrome browsers. But there is no reason why you can't do the thing where you point a video camera at a monitor, have AI black out the ads, and then view the edited video feed instead of the real one.
The only use for attestation I see is for work-from-home corporate Intranets. Sure, make sure that OS is up to date before you're willing to send High-Value Intellectual Property to the laptop. That... already works and doesn't involve web standards. (At my current job, I'm in the hilarious position where all of our source code is open-source and anyone on Earth can edit it, but I have to use a trusted computer to do things like anti-discrimination training. It's like opsec backwards. But, the attestation works fine, no new tech needed.)
> I see this as the "end game" for FPS cheating. I don't think anyone is doing it yet, but it's bound to happen.
You're behind the times. It's not widespread but it's been happening for years.
Also, the other day the Selenium author (IIRC) said they are working on such a thing for "automated testing"
So this proposal will do nothing to prevent bots; maybe increase the cost a little.
On the other hand, it will surely discriminate people, new emerging technology and companies. No other search engines can be built. No new browsers. No openness.
Anyone supporting this proposal is either pure evil or stupid or both.
> and have your bot synthesize keypresses and mouse movements
Is this truly going to work though? Captcha providers already monitor mouse and keyboard movement while you're on the page. Can you really "synthesize" human-like mouse movements around the page? I'm not so sure.
Captcha providers can't rely exclusively on mouse movement because of accessibility considerations, and it seems pretty easy to emulate human-like keyboard interaction. Emulating realistic mouse movement is more difficult but probably doable too.
I bet it's pretty easy. Capture your own mouse movements from one place to the next as denoted by clicks. Then train a model on reproducing those movements, using your captured data of movement from points A to B. It would probably generalize well enough to pass the verifications. Humans are very unpredictable, so I assume those are mostly looking for superhuman speed and accuracy.
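Even without training a model, a toy version of this is easy to sketch: bend the path with a random control point, add jitter, and vary the speed so it peaks mid-movement. All the constants below are made up, and real anti-bot systems look at far more than path shape:

    import math
    import random

    def human_like_path(start, end, steps=60):
        """Yield (x, y, delay) samples along a jittered curve from start to end."""
        (x0, y0), (x1, y1) = start, end
        # One random control point so the path isn't a straight line.
        cx = (x0 + x1) / 2 + random.uniform(-100, 100)
        cy = (y0 + y1) / 2 + random.uniform(-100, 100)
        for i in range(steps + 1):
            t = i / steps
            # Quadratic Bezier interpolation plus per-sample jitter.
            x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1 + random.uniform(-1.5, 1.5)
            y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1 + random.uniform(-1.5, 1.5)
            # Slow at the ends, fast in the middle, roughly like a human flick.
            delay = 0.004 + 0.01 * (1 - math.sin(math.pi * t))
            yield (x, y, delay)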
This is also how you know the "this won't impact extensions" talk is likely nonsense.
If you can still run extensions you still need captchas. So one possible road this takes is Google launches it, everybody still uses captchas because extensions in desktop browsers still make automating requests trivial -- and then we lock down extensions because "we already locked down the hardware and we really do need to do something about captchas..."
> We're still discussing whether each of the following pieces of information should be included and welcome your feedback:
* The device integrity verdict must be low entropy, but what granularity of verdicts should we allow? Including more information in the verdict will cover a wider range of use cases without locking out older devices.
* A granular approach proved useful previously in the Play Integrity API.
* The platform identity of the application that requested the attestation, like com.chrome.beta, org.mozilla.firefox, or com.apple.mobilesafari.
* Some indicator enabling rate limiting against a physical device
WEI acts as proof that "this is a browser", not "this is a human". But browsers can be automated with tools like Selenium. I'd guess that with the advent of complicated, JS-based captchas, browsers under automation are already the major battleground between serious scrapers and anti-bot tools.
I also don't understand how WEI does much to prevent a motivated user from faking requests. If you have Chrome running on your machine it's not gonna be too hard to extract a signed WEI token from its execution, one way or another, and pass that along with your Python script.
It looks like it basically gives Google another tool to constrain users' choices.
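A sketch of what such a replay would look like if a valid verdict could be captured from a running Chrome. The header name and endpoint are invented, and if verdicts end up bound to the requesting page or expire quickly (details the proposal leaves open), this exact trick would not work:

    import requests

    # Hypothetically extracted from an attested Chrome session (e.g. via a proxy or debugger).
    captured_verdict = "eyJhbGciOi..."

    resp = requests.get(
        "https://example.com/api/data",
        headers={
            # Invented header; the proposal doesn't define how verdicts are sent to servers.
            "Sec-Web-Environment-Integrity": captured_verdict,
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        },
        timeout=10,
    )
    print(resp.status_code)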
> But browsers can be automated with tools like Selenium
And I will bet anything that if the browser is being instrumented via webdriver it will attest as such. You would have to automate the browser externally.
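Worth noting that WebDriver-driven browsers already announce themselves today, before any attestation enters the picture: the WebDriver spec requires navigator.webdriver to be true under automation. A quick check with Selenium's Python bindings:

    from selenium import webdriver

    driver = webdriver.Chrome()  # assumes chromedriver is installed and on PATH
    driver.get("https://example.com")
    # True whenever the session is under WebDriver control.
    print(driver.execute_script("return navigator.webdriver"))
    driver.quit()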
Will it attest that it's running an extension? I can intercept and modify web requests, redirect web requests, and send web requests to other domains through a web extension. I can also scrape the HTML and I can use native messaging or normal HTTP requests to send that information out of the browser. And I can also modify CORS headers to get rid of restrictions around sending requests from another domain.
I can't literally emulate mouse movements but the only place that matters is... captchas. If you're not watching for those kinds of behaviors, then a browser even without webdriver can be automated just fine. And if you are watching for those behaviors, then you're running a captcha, so what is WEI helping with?
Google claims this is not going to impact browser extensions, debugging, etc... but if it's not going to impact that stuff, then it's not really helpful for guaranteeing that the user isn't automating requests. What it is helpful for is reducing user freedom around their OS/hardware and setting the stage for attacking extensions like adblockers more directly in the future.
Most captchas these days are already only there to enforce Google's monopoly. If you use an "approved" browser and let them track you, you don't get one; browse anonymously and you can't get past. That ship has already sailed, and it's already evil, anticompetitive behavior.
> We can not let a tyranny of 1% of users steer the ship.
Far less than 1% of my users use the accessibility features. In fact, it is closer to 1% of 1%. Does that justify the far, far easier development and bug testing that I would enjoy if I were to stop providing accessibility features?
> We can not let a tyranny of 1% of users steer the ship.
Normally I'd agree with you on that the tyranny of the minority is a bad thing, but sometimes the minority actually has a point and this is one of the cases where the minority is _objectively_ correct and letting the majority decide would end up in a complete dystopia. Democracy only works if everyone is informed (and able to think logically/critically, not influenced (either by force or by salary), etc.) and in this case the 99% simply do not have any clue on the effects of this being implemented (nor do they care). This entire proposal is pure orwellian shit.
That is not controversial at all, but rather a plain fact about the short term incentives! If adoption of this technology weren't an attractor, then we'd have nothing to worry about. But the problem is the functionality of this spec, supported by the fundamental backdoor of corporate TPMs, is set up to facilitate power dynamics that inevitably result in full corporate control over everyone's computing environment.
There are obvious benefits here. The ability to remove captchas is one, the ability to ensure that clients are running the latest updates before accessing sensitive content, etc.
But the power is too significant. If it were some small subset of positive assertions I'd be ok with this, but the ability to perform arbitrary attestation is beyond what is required and is far too abusable.
I built a new PC for a friend, and getting the AM5 platform stable was ridiculously challenging, so there were several reinstallations of Windows involved. He didn't use a password manager, so there were a lot of logging in, password resets etc. involved. For virtually every service he had to login to he was asked to complete a CAPTCHA. For Steam in particular: he had to do the first login on the website, because the CAPTCHA inside the application appeared to be bugged and was more like psychological warfare than human-verification. The frustration was palpable.
Also, turn on a VPN some time (a signal to Google et al. that you're trying to bypass content region-restrictions, or funnel mobile traffic through an ad-blocker) and you are basically guaranteed to see nothing but CAPTCHAs from the predominantly Cloudflare-owned-and-operated Internet.
So yes, it's a big problem, but only if your web environment (tracking metadata) are not sufficiently "trusted" :D
Quite a few: Brave browser + Mullvad VPN.
I enjoy doing captchas wrong, mainly because I can't believe how US fire hydrants, buses, and crosswalks have become so important to me.