Requiring authorized silicon (and software) isn't even the biggest problem here.
They do not use zero-knowledge proof systems or blind signatures. So every time you use your device to attest, you leave behind something (the attestation packet) that can be used to link the action to your device. They put on a show about how much they care about your privacy by introducing indirection into the process (the static device 'ID' is used to acquire an ephemeral 'ID' from an intermediate server), but it's just a show, because you don't know what those intermediary servers are doing: you should assume they log everything.
And this is just the remote attestation vector; the DRM 'ID' vector is even worse (no meaningful indirection: every license server has access to your burned-in-silicon static identity). And the Google account vector is what it is.
There are several possible reasons for this. The obvious one is that they want to be able to violate your privacy at will, or are mandated to have the capability. The other is that because it's not possible to link an attestation to a particular device the only mitigation to abuse that is feasible is rate limiting, which may not be good enough for them: an adversary could set up a farm where every device generates $/hour by providing remote attestations to 'malicious' actors.
> The other is that because it's not possible to link an attestation to a particular device the only mitigation to abuse that is feasible is rate limiting
I still don't see how you can keep something anonymous and still rate limit it. If a service can tell that two requests came from the same party in order to count them, then two services can tell that two requests came from the same party (by both pretending to be the same service), and can therefore correlate them.
The way it would work with blind signatures is that the server will know the device that comes to it to request a blinded signature and will be able to rate limit how often that device asks it.
But once you get the response you can unblind the signed signature and obtain the token (which is just the unblinded signature). This token can then be used only once, because it's blacklisted after use (and it expires before the next day starts, for example).
The desired property of blind signatures is that, given a token, it's information-theoretically impossible to determine which blinded signature it came from (because it could have come from any of them), even if the cryptographic primitive is broken by a mathematical breakthrough or a quantum computer. There is technically the danger that if the anonymity set is too small and all the other participants collude, you can be singled out.
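To make the mechanics concrete, here's a minimal sketch of a Chaum-style RSA blind signature in Python. Everything is toy-sized and unpadded, purely for illustration; a real deployment would use full-size keys and a hardened blind-RSA construction, not this.

    import hashlib
    import math
    import secrets

    p, q = 104729, 1299709             # demo primes; real keys are >= 2048 bits
    n = p * q
    e = 65537                          # issuer's public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # issuer's private exponent

    def h(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    msg = b"one attestation token, expires tomorrow"

    # Client: pick a secret blinding factor r, send blinded = H(m) * r^e mod n.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    blinded = (h(msg) * pow(r, e, n)) % n

    # Issuer: signs the blinded value (and can rate limit the known device
    # here) while learning nothing about H(m).
    blind_sig = pow(blinded, d, n)

    # Client: unblind to obtain an ordinary signature on H(m) -- the token.
    token = (blind_sig * pow(r, -1, n)) % n

    # Verifier: checks the token against the public key alone. Given the
    # token, every blinded value the issuer ever saw is an equally likely
    # origin, which is the information-theoretic blindness described above.
    assert pow(token, e, n) == h(msg)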
Correlating times is a threat vector that needs to be managed, either by delaying actions (not tolerable for normal users) or by acquiring tokens automatically and storing them in anticipation. Or something else I haven't thought of, probably. There is also a networking aspect to this: you would need a decentralized relay network that masks the origin of requests.
> But once you get the response you can unblind the signed signature and obtain the token (which is just the unblinded signature).
The premise of this is to keep the person issuing the tokens and the person accepting them from correlating you.
The issue is when you have more than one service accepting them. You go to use Facebook and WhatsApp, but they're both Meta, so you present the same unblinded signature to both services, and now your Facebook and WhatsApp accounts are correlated against your will. And they have a network that does the same thing, so you go to use a third-party service and they require you to submit your unblinded signature to Meta, which allows them to correlate you everywhere.
That's the point. You go to example.com and get the "sign in with Google" box as the only login option, but now you can't have separate uncorrelated Google accounts. Or if browsers do it automatically then every site does a background load or redirect through adtracker.nsa so you're presenting the same token on every service.
It's not the user who wants any of this to begin with. "You would never do that" except that it's now the only way to be let into the service.
If A adopts a blind signature scheme, it implies A is cooperating in establishing privacy infrastructure. If A is so malicious that it would advertise a sound privacy system and then immediately sabotage it, that's a different matter...
The proposals are generally to have the government do the blind signature scheme. But then even if they do so in good faith, the ad services will immediately set to work thwarting it.
I'm as biased against cryptocurrency as everyone, but couldn't we have the requestor do a bit of mining work to mint that initial id? I mean, if the service is actually making a bit of money from each request, the need for rate limiting just vanishes, right?
If proof of work is the "payment" to prove that you're human, many AI startups will outbid poor people living in third world countries. They will even outbid some Americans.
Yes, those AI startups can also buy cheap Android phones at scale, but it's a bit harder because they'll pay for stuff that their bots have no use for (a screen, a battery, a 5G radio, software, branding, distribution, customer support etc).
> If proof of work is the "payment" to prove that you're human, many AI startups will outbid poor people living in third world countries. They will even outbid some Americans.
The difference is that if you're human you can create an account and then carry on using it for decades, whereas if you're an aggressive scraper bot or spammer then you get banned and have to buy new accounts over and over.
As I see it, living requires money. If we have people on this planet that are too poor to digitally prove that they're alive, then we need to figure out a way to distribute the Earth's wealth more equally in general, rather than to require hardware attestation, which seems to be worse on essentially every metric, including inequality.
Attestation is a service, like every other service. Why should it necessarily be free? Especially now that we all know that "free" on the web means ads & tracking?
I think we should just accept that some things should cost a bit of money and move the discussion to "how much should it cost", rather than trying to sweep economics under the rug.
> I still don't see how you can keep something anonymous and still rate limit it.
Constructions like this have existed for many years, e.g. Semaphore's RLN (rate-limiting nullifier). This particular construction was found infeasible 7 years ago, but zk-SNARK tech has made huge progress since then, and it is way cheaper now.
Just to give an example to prime your intuition: define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter). Make your scheme provably reveal the usage token when you authenticate. Now you can use the service 16 times a day on a particular domain and no more, simply by blocking token reuse. And yet the service has no ability to link different tokens to each other or to a specific person, because they don't have anyone else's private keys.
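A toy version of that construction, with plain hashes standing in for the ZK machinery (in the real scheme the server never sees the private key; it only verifies a proof that the token was formed correctly):

    import hashlib
    from datetime import date

    def usage_token(private_key: bytes, domain: str, day: str, counter: int) -> str:
        if not 0 <= counter < 16:   # the 4-bit counter caps usage at 16/day
            raise ValueError("out of tokens for today")
        data = b"|".join([private_key, domain.encode(), day.encode(), bytes([counter])])
        return hashlib.sha256(data).hexdigest()

    seen: set[str] = set()          # server side: tokens already redeemed

    def redeem(token: str) -> bool:
        if token in seen:
            return False            # reuse blocked -> rate limit enforced
        seen.add(token)
        return True

    key = b"never-leaves-the-device"
    today = date.today().isoformat()
    tokens = [usage_token(key, "example.com", today, i) for i in range(16)]
    assert all(redeem(t) for t in tokens)    # 16 uses succeed
    assert not redeem(tokens[0])             # the 17th is rejected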
You can make variations on this for a wide spectrum of rate limiting behaviors.
But also I agree with xinayder's comment -- the anti-competitive, anti-privacy, invasive surveillance is unacceptable. There is a real risk with ZKPs that we just make the poison a little less bitter, with the end result being more harm to humanity.
I think ZKP systems are intellectually interesting, and their lack of use helps make it clear that the surveillance is really the point of these schemes, not security, because most of the security (or more of it) could be achieved without most of the surveillance.
But allowing the Apple/Google duopoly to control who can read online is wrong, even if they did it in a way that better preserved privacy.
I have sympathy for the desire, but that isn't something you actually get through Google's surveillance-ware.
You can change the information you put into the hash in my example to get one go per site per day, or one per year, or even one per site ever. All without cross-site linkability (which does you no good) or giving Google visibility into everyone all the time.
But that still doesn't get you your desired unevadable bans, though with suitable parameters it can get as close as Google's spyware approach while being much more private.
I think a time-oriented rate limit makes the most sense, considering the limits in practice (an attacker just gets access to another discarded phone, or tricks someone into authenticating for them via theirs) -- it basically means the best you can do against dedicated attackers is rate limit them. So why subject honest users, who may have good privacy reasons to use multiple accounts over time, to worse effective limits than attackers?
But you don't have to agree with that to accept that schemes much more private than google's are possible.
Because-- in this hypothetical-- your user agent restricts the usage to the name displayed on the screen, and also because your agent won't send the same value twice (it'll increment the counter or tell you that it's run out of tokens).
Requiring the name to be displayed isn't going to do much for ordinary people. They mostly wouldn't look at it and even if they did, "continue as-is or no service for you" means they continue as-is.
Not sending the same value twice would prevent them from being correlated, but now what are you supposed to do when you run out? Running you out could even be the goal: You burn a token to get a cookie and now you can't clear your cookies or you'll be denied a new one since you're out of tokens.
I'll be the first to admit that the technology can be abused-- that it's even ripe for abuse. That sort of problem can be avoided by allowing 'enough'-- and if the goal is to just prevent a site being flooded out 'enough' could be pretty high.
Of course, I think the effective purpose of google's attest feature is to invade everyone's privacy which we should assume is part of why they don't use privacy preserving techniques. Privacy preserving techniques could still be abused, however.
Maybe they're even worse for humanity because they make bad schemes more palatable. I think right now I lean towards no: the public in general will currently tolerate the most invasive forms of these systems, so our issue isn't that they're being successfully resisted and the resistance might be diminished by a scheme which is still bad but less bad.
Can we stop normalizing being surveilled online and on our devices?
By saying something like "the problem is not hardware attestation, but that they don't use ZKP", you are normalizing the new behavior. You shouldn't. It doesn't matter if they use ZKPs or the latest, most secure technology for hardware attestation. The issue is hardware attestation. It's the same with Age ID. The issue is not that Age ID is prone to data leaks; the problem itself is called Age ID.
Let them know. Write a letter to the CEO. And vote with your wallet and switch banks if you can. There's always a bank willing to offer you a non-app 2FA scheme.
Banks don't do this because of profit. They do it because of decades of laws pushing in this direction: anti-money laundering, know your customer, digitalised currency, abandoning cash, preventing tax evasion, etc. It's been getting more extensive over time.
None of the things you mentioned inherently require the user to own (and babysit) an expensive general-purpose computing device produced by tracking-obsessed adtech giants and with software obsolescence built into the product.
I think you're naively presuming the issue is simple and easy to address with a letter.
Regardless of your bank, payment systems such as Visa and Mastercard have blocked transactions involving mainstream online stores such as Steam because they unilaterally deemed some games to be problematic. You cannot fix this problem with an email.
These are two unrelated problems. One is "payment systems use imperfect heuristics in their own operations to fulfil their regulatory obligations." The problem I was referring to is "banks push 2FA onto end users but are unwilling to give them alternatives that don't involve meddling with the user's own most private and expensive device."
The latter is absolutely a thing where customers can (and should IMO) push back hard.
> These are two unrelated problems. One is "payment systems use imperfect heuristics in their own operations to fulfil their regulatory obligations."
No, they are not. You have people reliant on this software infrastructure for very basic aspects of their life, such as using their own money to buy whatever they feel like buying, and you have people being deprived of their rights because operators of said infrastructure actively prevent and deny their ability to do so. This has nothing to do with heuristics, and everything to do with granting people the power to dictate what you may or may not do with the things you own.
Do you think banks are using attestation gratuitously? It helps prevent a lot of fraud. You are opposing something that saves people’s savings every day just because you think it takes “freedom” away from a few hobbyists. Do you even have a phone that does not support hardware attestation or is all this posturing about something hypothetical?
Can you show me examples where locking down an OS has prevented fraud in banking?
Honestly, if the only way to secure your banking system is by locking down users' devices, there is something really bad going on at your end, security-wise. Your system should be secure even without locking down user hardware.
One of the threat models is that a fraudster tricks a non-technical user into installing malware, which then manipulates the user interface so that next time the user tries to send money to Bob, it actually goes to Mallory.
That's a legitimate concern, and one of the reasons why PSD2 mandates that all 2FA devices must have a display that shows the user where they're about to send the money and how much.
And one of the threat models that police use in the US is tracking women suspected of going for abortions through the use of road cameras, and other surveillance methods.
Once you have the attestation in place you have no guarantee who is going to get access to data like what apps are present on your device, and there will be nothing you can do to stop it.
Meanwhile, we could educate people against common scams.
How is this not just trading one smaller bad for a bigger bad? Why is this touted as an improvement?
That's why I'm strongly against remote attestation of general-purpose hardware.
I use a handheld card reader with a display as a 2FA for my bank transactions. It shows me the transaction and, after I confirm, sends a TAN to the bank. It is not a general-purpose device but a certified, tamper-evident/-resistant black box that does just that one thing.
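Roughly how such a reader binds the code to the transaction: the TAN is a MAC over the displayed details, keyed by a secret sealed inside the card. This is a simplified model with made-up names, not any particular chipTAN spec.

    import hashlib
    import hmac

    card_secret = b"key-sealed-inside-the-smartcard"   # hypothetical
    counter = 41                                       # card's anti-replay counter

    def tan(amount: str, recipient_iban: str) -> str:
        msg = f"{counter}|{amount}|{recipient_iban}".encode()
        mac = hmac.new(card_secret, msg, hashlib.sha256).digest()
        return f"{int.from_bytes(mac[:4], 'big') % 1_000_000:06d}"

    # The reader's display shows amount and IBAN; the user checks them and
    # types the 6-digit TAN into the banking site. Malware that swaps the
    # recipient yields a transfer whose expected TAN no longer matches.
    print(tan("100.00", "DE02120300000000202051"))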
> Meanwhile, we could educate people against common scams.
There's a million ways you can get scammed, no matter how many hours of training you've had.
You can't educate (many) people against common scams. But people should have the freedom to opt out of surveillance in their private lives, at the risk of exposure to scams.
When online banking was first created, it was an absolute chaos zone. Everyone was accessing it from desktop machines riddled with viruses and malware. There are endless stories of people discovering their life savings had been wired to Belarus by some malware running on their machine that had grabbed their banking credentials when they logged in.
> U.S. prosecutors say Citadel infected more than 11 million computers worldwide, causing financial losses of at least a half billion dollars.
Half a billion dollars, by a single guy with a single virus!
Different parts of the world came up with different solutions for this. The US made all ACH payments reversible and international wires difficult, but that just meant the receiver paid for fraud instead of the person whose machine was full of viruses. This was an obviously bad set of incentives and a hacky, panic-based fix. Banks elsewhere in the world settled on providing users with authenticator devices that looked like small calculators, into which you could type transaction details after plugging in a smart card. Malware could still steal all your financial data, but it couldn't initiate transactions.
Obviously, all this was a hack. What was needed was computers that were secure. Apple and the Android ecosystem eventually delivered this, and the calculator devices were retired in favour of smartphones with remote attestation. This was better in literally every way, for 100% of users. Firstly, it protects financial privacy and not just transaction initiation. Secondly, it's a lot more convenient to use a device that's always with you than a dedicated standalone single-use computer. Thirdly, adding remote attestation made no difference because that's what the calculator devices were doing anyway. Fourthly, even in the case of customers of small American banks that weren't capable enough to manage dedicated hardware rollouts, getting rid of fraud instead of pushing liability around allows for lower prices and fewer headaches.
So remote attestation is a non-negotiable requirement for digital banking of any form. When Microsoft didn't deliver, most banks preferred to literally manufacture and sell their customers single-use smartcards that remotely attested by you manually copying numbers back and forth between screens. Or they hid the cost of rampant fraud in the price of other services until such a time as Apple/Google saved them.
> Secondly, it's a lot more convenient to use a device that's always with you than a dedicated standalone single-use computer.
The price the owner pays for this is being locked out of their own expensive general-purpose computing device while still having to bear all the inconveniences (babysitting OS updates, configuring stuff, keeping it charged, having the battery fail, buying a new device every five years, etc.).
In the meantime, the standalone chip-and-TAN device costs 30 bucks, is powered by three AAA batteries that hold their charge for five years, lives for 20 years, and never needs a single software update.
I'd choose the small single-purpose device over the enshittified, locked-down smartphone every single time.
Spectre doesn't work across process boundaries, so I don't think they are. You can't Spectre your way into a banking app on an iPhone. Or if you can I'd like to see it in action.
I don’t think "Spectre doesn’t work across process boundaries" is correct as stated; cross-process and cross-security-domain Spectre attacks have been demonstrated. But I agree that "a malicious app can trivially Spectre its way into an arbitrary banking app on a patched iPhone" is a much stronger claim, and I’m not aware of a public demonstration of that exact attack. My point is only that process isolation alone is not, in principle, a complete answer to Spectre-class attacks.
The only similar bug I'm aware of was Meltdown, an Intel-only bug that was immediately patched with a microcode update. But Meltdown was a different bug from Spectre: Spectre is a class of attacks that's hard to solve by design, whereas Meltdown was a specific bug that was easy to solve.
You could also open your front door with your smart phone. It would look high tech until your battery is empty.
Sometimes I see people stuck at the train station, unable to check out. They usually find someone with a charger, but technically the procedure is to fine them for not having a ticket. Then one might still need to buy a ticket to continue the journey. (Bring cash.)
Phones are usually empty when things [already] aren't going as planned.
Back in my iPhone days, I once got bitten by a bug where the app developer failed to raise that flag "dear OS, I'm in the middle of presenting a ticket for optical scanning, and it would be really amazing if you could just, you know, not disturb the screen with random shit for a couple seconds."
Unfortunately for me though, the turnstile that I was about to pass to exit the train station had both an optical scanner and some NFC thing lumped into the same physical module, and every time I tried to scan my ticket, the phone would raise its NFC screen and hide the 2D matrix code.
So yes, you can have a fully charged phone and a perfectly valid ticket with the latest software and still get stuck in a train station.
>....the calculator devices were retired in favour of smartphones with remote attestation. This was better in literally every way, for 100% of users.
Not 100%. A robber can force people to activate facial recognition or fingerprint sensors. Forcing someone to type a PIN code is harder, but doable. If one doesn't bring the authenticator and bank card, they can't initiate transactions.
Remote attestation is a technology, not a policy or a political effort, so it can't be inherently evil. You can disagree with all its known or proposed uses, but then I think it makes more sense to name these.
DRM is a technology and is inherently evil.
Web attestation is DRM for the web, and is inherently evil.
Age ID is a technology and is inherently evil.
We have over 30 years of the world wide web and for these more than 3 decades this was never a problem. Suddenly, we "need" to create new technology that seem to be security features, but are essentially just being used for evil, thus being inherently bad.
It's not like these technologies were created for the greater good and misappropriated by bad actors. They were proposed by bad actors in the first place; they cannot be inherently good.
DRM is arguably a specific use of various generic technology ranging from whitebox cryptography to trusted computing.
I don't think remote attestation (or even more so its umbrella technology, trusted computing) is nearly as specifically targeted as DRM.
> We have over 30 years of the world wide web and for these more than 3 decades this was never a problem. Suddenly, we "need" to create new technology that seem to be security features, but are essentially just being used for evil, thus being inherently bad.
I agree that requiring remote attestation for generic web use is evil. It's way too heavy-handed an approach, better reserved for narrower, higher-stakes settings.
I still don't think this somehow outright disqualifies the technology itself.
>I still don't think this somehow outright disqualifies the technology itself.
A technology squarely and 100% intended to give people other than the end user the ability to sleep soundly at night, knowing those dastardly end users can't muck with their (the non-end user's) software on their (the end user's) devices, is only a tool for the authoritarian-minded. Sorry mate, but if you're sitting here thinking it's useful and neutral, you are part of the problem, because you're eyes-wide-shutting the fact that the only people gaining from the technology are those who already have a terrible trustworthiness record in terms of not abusing the sovereignty of another person's machine.
Show me an industry that ships source code, and manuals with all software that runs on the device, along with hardware manuals and the manuals to write your own drivers and doesn't use hardware primitives to enforce their business models over you, then we can talk about an industry where "trusted computing" might be neutral to the end user. History has not seen this relationship borne out, however.
The "trust" in "Trusted Computing" has only ever been realistically unidirectional, favoring entrenched industry players. As a rule of thumb, if the primary beneficiaries of a feature are over 90% legal fictions, your feature ain't neutral. It's hostile to humanity. Period.
> Show me an industry that ships source code, and manuals with all software that runs on the device, along with hardware manuals and the manuals to write your own drivers and doesn't use hardware primitives to enforce their business models over you, then we can talk
>We have over 30 years of the world wide web and for these more than 3 decades this was never a problem.
Are you seriously trying to suggest copyright infringement has not been an issue over the last 30 years? Both of these are solutions to problems we've had over those 30 years, created for the greater good, to solve problems developers were actually facing.
Grocery stores are a trillion-dollar industry, yet you still see stores close because theft is possible. The simplest way games and music struggle is by losing sales to people who can play them without paying.
Individual self employed photographers successfully use the DMCA to get significant payouts from large publishers and news organisations every single day.
Different technologies may selectively amplify existing power. If the actions a technology enables are disproportionately evil, it may at the very least be considered very useful for evil.
Suppose someone invents a mind-reader that lets the user read the thoughts of anybody else in range. But the mind-reader requires great up-front costs to produce and also allows people with stronger readers to remotely destroy weaker readers, where strength is basically a function of cost.
In a vacuum, the mind-reader is "just a technology". But it aids autocratic surveillance much more than it aids citizens who want to surveil back. It's "neutral", but its impact is decidedly not.
TPMs and remote attestation enable entities with power to enforce their existing power much more effectively. In contrast, a general-purpose computer does the opposite because anybody can run whatever code they want, they can adversarially interoperate with anybody they feel like, and so on.
One of these is more evil than the other, even though they're both "just technologies".
I think people are too quick to dismiss the possibility that some technologies are just bad and harmful and we can't shrug off responsibility and say I'm just making a neutral technology and the people using it are the ones causing harm.
I have 2 servers, Alice and Bob. Bob has a secret, and I want Bob to be able to share that secret with Alice. However, I want Alice to be able to prove to Bob that it is actually Alice, that it is running the correct AliceOS, and that AliceOS was loaded on bare-metal Alice without nefarious pre-boot or virtualization hooks.
A TPM with measured boot does exactly this: remote attestation is how Alice proves to Bob that it is in a trusted configuration and wasn't tampered with.
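As a sketch of what that proof looks like, here's PCR-style measurement modeled with SHA-256 and an HMAC standing in for the TPM's attestation-key signature (component names are made up; real TPMs sign quotes with an asymmetric AIK that chains back to the manufacturer):

    import hashlib
    import hmac

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # PCRs can only be extended, never set: new = H(old || H(component))
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    BOOT_CHAIN = (b"firmware-1.2", b"bootloader-3.1", b"alice-os-kernel-6.6")

    # Alice: each boot stage measures the next one into the PCR.
    pcr = bytes(32)
    for component in BOOT_CHAIN:
        pcr = extend(pcr, component)

    # Quote: the TPM signs (PCR, nonce); HMAC stands in for the AIK signature.
    aik = b"attestation-key-inside-the-tpm"
    nonce = b"bob-freshness-challenge"
    quote = hmac.new(aik, pcr + nonce, hashlib.sha256).digest()

    # Bob: recompute the PCR he expects from known-good measurements and
    # check the quote before handing over the joining token.
    expected = bytes(32)
    for component in BOOT_CHAIN:
        expected = extend(expected, component)
    good = hmac.new(aik, expected + nonce, hashlib.sha256).digest()
    assert hmac.compare_digest(quote, good)   # Alice booted what Bob trusts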
That's the academic viewpoint, but in practice it's used for far more hostile purposes.
(One argues that since you own both of them, you should simply set up the two servers yourself with a key of your own choosing, asymmetric or otherwise, and then restrict physical access to them.)
Alice runs many services and has a rather large attack surface. I don't want Alice to persist those secrets, only to have them briefly at startup (think joining tokens). Bob however has exactly one job, verify that Alice-1 to Alice-N are in a trusted configuration before granting them access to the cluster.
Very recent events in the Linux kernel prove that it isn't safe to assume "0600 root:root" is sufficient to protect secrets from a misbehaving container.
As someone who wanted to improve users' security, that's exactly why I find this thread's fanatical opposition to attestation baffling. Nearly everyone uses a device that supports hardware attestation. It's the best available tool to protect users from malware. We do implement a fallback that lowers security but lets the few users whose devices can't attest properly continue; but that really lowers security, since we can't even know whether the device's cryptography is itself compromised, and hence can't really trust anything it sends. If you have a different solution, do share it! I would love to use something you guys don't find abhorrent! But until then I don't really see the reason for all this negativity.
Sadly, the problem isn't the TPM or Remote Attestation. It's Google et al choosing to only talk to devices and software they like without concern for what the user wants or trusts. Compounded by everyone else just going along with it.
A TPM where the device owner can't take ownership of the root key is worse than no TPM at all.
If the price to pay for security is freedom, then let users' devices be insecure. With time, they will learn good security hygiene. And if they don't, maybe they don't deserve it.
The policy is "I will not let you access this system unless your system software implements this technological protection."
A camera is technology. A security camera is policy, because it's a camera hooked up to policies on how to watch, record, and respond to what is required, and it is a political effort when connected with laws about face masks, prohibiting spray painting of the cameras, and allowing privacy intrusions.
How should a government act to prohibit the misrepresentation of one's characteristics online when accessing services for which that government has formally written characteristic-based regulations into law?
If your answer is “they shouldn’t ever do that”, then you’re promoting an uncompromising position that governments are disinclined to adopt, being the primary user of identity issuance and verification on behalf of their citizens.
If your answer is “they should do that differently”, then you have a discussion about (for example) ZKP or biosigs or etc., such as the thread you’re replying to.
Which of these two paths are you here to discuss? I want to be sure I’ve correctly understood you to be arguing for the former in a thread about the latter.
You're not necessarily being surveilled just because you're forced to authenticate yourself. It often is the case practically, but it's not inherent, and mixing the two up makes the discussion too imprecise for a technical forum.
Hardware attestation often also has problems of centralization, but that's something else as well.
By just labeling it an abstract bad thing without seeing nuance, I'm afraid you won't be convincing those in power to pass or block these laws, or convincing your fellow voters which efforts to support.
I think labeling this an abstract problem, when all the existing implementations have concrete but different problems, is a little bit of a motte-and-bailey fallacy.
The surveillance of the future will be powered by the things we produce today. If the accepted algorithms leave cookies, those cookies will be tracked and monetized. The bad argument is the forced verification to do things on the internet. Making that start at the hardware is a lock-in that's not okay. Business will always own the services, and making standards that trade our practical liberty for the sake of security is a very compromised position, in my opinion.
And it does start with the age verification, followed by ID checks, etc. It's compromising precisely because no lines are drawn and no rights to privacy are codified in law. Without guardrails, the worse path will likely be taken for maximum profit.
A counterexample is not a valid refutation of the general point. It can be both true that Google will deanonymize you, given the chance, and that anonymous attestation is possible.
Having thought about ads: what is the ideal feedback and information loop from manufacturers to consumers? How best to distribute the information of who can manufacture what, at what cost/price, what it does, and when it is appropriate for consumers to receive or pull that info, and from where? And if it ends up being a monopoly of one centralized system, how do you allow a competitor to break through without ads?
> It often is the case practically, but it's not inherent
Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel.
Hardware attestation is a surveillance mechanism. If China was enforcing the same rule, you would immediately identify it as a state-driven deanonymization effort. But when the US does it, you backpedal and suggest that it could be implemented safely in a hypothetical alternate reality. Do you want to live in a dystopia?
> Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel.
Who is?
> But when the US does it [...]
I don't live in the US, and while US is often setting global trends, in this case I don't think that's actually that likely, unless it somehow goes significantly better (i.e., the benefits actually vastly exceed the collateral damage to anonymity and resiliency via heterogeneity) than expected.
There is a problem where it's becoming increasingly hard to determine whether the internet packets coming to your service are sent at the behest of a human in the course of normal activities or by an automated program.
If all the internet consisted of was static content, that wouldn't be much of a problem. But we live in a world where packets coming to your service result in significant state changes to your database (such as user-generated content).
I suspect that we are currently in the valley of do-something-about-it on the graph which is why you see all this angst from the big players. Would Google really care if automated programs were so good that they were approximating real humans to such an extent that absolutely no one can tell? I suspect they would not only be happy with such a state of affairs, they would join in.
> Requiring authorized silicon (and software) isn't even the biggest problem here.
It is indeed the biggest issue. It prevents me from owning and using the hardware I pay for, own, or make myself. It's switching the personal computer as we know it from being open to being proprietary and owned by two large US corporations.
I simplified the process in my description. The DRM ID Android has is not what I was referring to.
I was referring to the static private key that is stored in the silicon. At any time, an application can initiate a license request using the DRM APIs, which will elicit an unchangeable HWID from your device. The only protection is that it will be encrypted so that only an authorized license server's private key can decrypt it, so collusion may be required (intel agencies almost certainly sourced 'authorized' private keys for themselves). Google or Apple also have the option to authorize keys for themselves. In 'theory' all such keys should be stored in "trusted execution environments" on license servers and never divulge client identities, for whatever that's worth: <https://tee.fail>.
The "license challenge" (which might be a misnomer; I think it's supposed to be a license request) is just a packet (it can be saved and later sent anywhere), and it contains the encrypted certificate, which doubles as your HWID. An adversary needs to control the private key of the license "server" the challenge is for (this is a privacy measure introduced to prevent the CDM from offering the HWID to anyone who asks). So if you want the HWID, you need to work for it (one time) by stealing a private key, bribing/blackmailing employees, or issuing secret edicts ("here is a new license server we need a certificate for"). Working for Hollywood is also an option, I suppose.
Pirates sacrifice devices when they publish ripped content, due to the certificate being revoked after Hollywood downloads the torrent, and due to things like this:
For large-scale per-viewer tracing, implement a content identification strategy that allows you to trace back to specific clients, such as per-user session-based watermarking. With this approach, media is conditioned during transcoding and the origin serves a uniquely identifiable pattern of media segments to the end user.
Ultimately, the point of hardware attestation isn't to ensure that your device is trusted, but that the action you're trying to perform was done by a human, not a bot doing millions of them per second. It's just another CAPTCHA mechanism in disguise, required because bots have gotten so good at solving the existing ones.
With a secure device, the only way to get an attestation for an account signup is to do the signup on that device, with real fingers clicking real buttons on a real screen. There's no way to short-circuit the process by automatically sending a JSON request and bypassing the actual signup flow from a Python script, like you can do with an insecure endpoint.
With blind signatures, a single compromised device destroys the value of the entire scheme, as it can be used to issue an infinite number of attestations with 0 human oversight.
What we need is a blind signature construction where the verifier can revoke a signature, where each signature carries proof that it does not come from the same signer as any revoked signature, and where it is impossible for one signer to issue more than n distinct signatures during one time window. Not sure if this would be possible with ZKPs; my cryptography knowledge doesn't extend that far.
> Ultimately, the point of hardware attestation isn't to ensure that your device is trusted, but that the action you're trying to perform was done by a human, not a bot doing millions of them per second. It's just another CAPTCHA mechanism in disguise, required because bots have gotten so good at solving the existing ones.
...no? Maybe this is true of end-user device attestation. But there are other use-cases for attestation.
Server device attestation is an entirely different thing. It's used in e.g. IaaS "Confidential VM" offerings, where the audience for the attestation information is the customer, rather than the server host. It's a very pro-privacy / pro-data-sovereignty feature.
And while embedded device attestation is sometimes about preventing customers from tampering with IoT stuff you "sold" them, more often it's about being able to trust and confidently assert that e.g. the climate sensors you've deployed all over a forest as part of a research project haven't been fucked with to report false data by someone with an agenda. (Or to "apply denial" to your unmanned military satellite downlink station the moment you detect that there's some unknown person out there futzing with it.)
Can you revoke certificate for a specific device using privacy schemes?
Like imagine that someone managed to extract the key from a specific device and distributed that key in a software implementation to fake attestations. Now Google needs to revoke that particular key to disallow its usage. This is an obvious requirement.
My understanding is that this new reCAPTCHA is basically just remote attestation.
Remote attestation doesn't use blind signatures (that would be 'farmable'), so tying the device to the 'attestee' is technically possible with the collusion of Google servers: EK (static burned-in private key) -> AIK (ephemeral identity key in the secure enclave, signed by a Google server) -> attestation (signed by the AIK). As you can see, if the Google server logs EK -> AIK conversions, an attestation can be trivially traced back to your device's EK. This is also why we don't really see, and probably never will see, online services offering fake remote attestations: it would be pretty obvious that the next step of running such a service is Google becoming a customer and having all your devices blacklisted. Private farms probably won't last long either, as I'm sure Google logs everything and will correlate.
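A toy model of that chain, with random byte strings and a plain dict standing in for real keys and server state, just to show where the linkage lives:

    import hashlib
    import hmac
    import secrets

    ca_log: dict[bytes, bytes] = {}    # AIK -> EK, as retained by the issuer

    def issue_aik(ek: bytes) -> bytes:
        aik = secrets.token_bytes(16)  # the "ephemeral" identity key
        ca_log[aik] = ek               # <- this log is the privacy hole
        return aik

    def attest(aik: bytes, challenge: bytes) -> bytes:
        return hmac.new(aik, challenge, hashlib.sha256).digest()

    device_ek = secrets.token_bytes(16)   # burned into silicon, never changes
    aik = issue_aik(device_ek)
    packet = (aik, attest(aik, b"service-challenge"))

    # Any service can hand the packet back to the issuer, who looks up the EK:
    assert ca_log[packet[0]] == device_ek  # attestation traced to your device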
Unless something special is done with this new reCAPTCHA, not only are you locking internet services behind TPM chips, but you are also surrendering anonymity to Google. Unless you acquire untraceable burners for every service, the new reCAPTCHA will be technically capable of tying all your accounts across all these services together. Much like age verification. It may appear that the service would need to cooperate to link the reCAPTCHA session to your registration, but the registration time alone will likely be sufficient (the anonymity set will be all but destroyed).
worth noting that google/twitter/facebook/reddit/others colluded to combine sessions and identifiers, so that any person getting identified on any one session/IP would be identified on all
so while this comment is apt, i would ask them what they think of the previous chicxulub impact of the 2012-era collusion -- which to this day has not been reported on
(just realized emacs bindings work in comments, nice, no ctrl-x tho)
"Chicxulub impact" seems to be functioning as a bit of hyperbole to imply that this collusion was absolutely devastating, by analogy to the K-T extinction event 66 million years ago.
Not that I really can tell what this was devastating to. Maybe United States v. Apple (2012), where Hachette Book Group, Inc., HarperCollins publishers, Macmillan publishers, Penguin Group, Inc., and Simon & Schuster, Inc. conspired with Apple to raise ebook prices?
I can't say for sure, but is it possible they're referring to the founding of the Internet Association in 2012?[0]
I don't think it's that, because the Wikipedia article makes it seem like it was a force for good, but at the time, it wasn't certain at all that it would be that way.[1]
Beyond that, I'm not exactly sure what might be meant.
By exchanging and correlating data presumably? For example, anything I send or receive on Discord, I see reflected in my YouTube recommendations shortly after. It's downright egregious at times.
Most likely it's just run-of-the-mill Google Analytics/AdSense tags in Discord. Don't forget that Discord is web tech and loads all kinds of JS bundles, including trackers. The best solution is to stop using Discord, but the second best solution is to only use the web-app version. When you use the web app, you can install adblock and anti-tracking extensions. The amount of data Discord sends that gets blocked by these extensions is eye-opening.
If you run a website, it seems trivial to forward the attestation to someone else by putting the same code up on your website, and getting their device banned from google instead of your own.
The camera isn't the part doing that verification. The google service serving that "reCAPTCHA" is what's doing that validation. Unless you're using a custom browser that is reporting a different domain to google than the one requesting the reCAPTCHA, google's service will know which domain is which.
It would be generated by some other website like Amazon. Because I own, say, Meta, I copy these Amazon-generated codes over to Meta, make people scan them on their phones to sign into Meta and then pass the solution back to Amazon so my bots can sign into Amazon.
We don't yet know how the client side works, perhaps there will be a decompilation posted soon.
It's possible this scenario is acceptable to them because it means they can still tie your access to something that's easier to ban without requiring a full account login.
What are you implying? That it will become ineffective due to that?
That's possible... and they might change their mind if so, we will see.
I feel like it's a similar issue to when scrapers pretend to be an allowed-origin webpage in order to abuse "public" API keys for web services.
They could also require the mobile device to interact with the requesting webpage in some manner, similar to mutual PIN/codes for Bluetooth/TV pairing these days. That way bulk sharing of the codes would still require active participation from the device that requested it in the first place, likely with a short time limit.
Realistically, what Google will do in such a scenario is collect data about the illicit service, enumerate the devices the farm uses and what other activities the devices participate in. What you suggested has far less control over the devices that generate the attestations and it will show.
Also, if the implementation is competently done the phone will show the website for which you scanned the QR code. A user would be able to see whether or not that matches the site where they observed the QR code and proceed accordingly. In time Google will probably integrate it into the Chrome browser where a proxied QR code cannot even be shown.
I'm sure some people still remember how to mentally decode QR codes and verify ECDSA signatures from Covid days. Public transit ticket inspectors in my city also seem to be quite proficient at it :)
Age verification as a technical concept can be done in a privacy-preserving manner! Whether or not we want age verification is another debate, but let's stop making wrong technical claims about that: it doesn't help.
The trick is to define "privacy-preserving age verification" in an extremely narrow way that ignores any other privacy concerns.
For example, imagine you put the same private key into the 'secure element' of every single iPhone. You use code signing so that the key is only unlocked when the phone is running unmodified iOS with all security updates. You use encryption and remote attestation for the front-facing camera and Face ID depth sensor. You use NFC to read government-authenticated age and appearance data from biometric passport chips (or digital ID cards), and you store it on-device.
Then, when you want to access pornhub, they send an age challenge to your device, your device makes sure your face matches the stored passport, and if so it signs the challenge with the private key.
Pornhub gets an Apple-signed attestation of age, but because every phone signs challenges with the same private key, Pornhub can't link it to a particular phone or identity document.
So in a very narrow sense, privacy is preserved.
You can't use someone else's ID, as it checks your face every time. You can't fool it with a photo of the person because of the depth sensor. You can't MITM/replay the camera/depth data because the link is encrypted. You can't substitute software that skips the check with a rooted phone because of the code signing. Security holes can be closed by just pushing a mandatory OS update.
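A sketch of the narrow property being claimed, using the `cryptography` package; `fleet_key` and the message format are hypothetical:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    fleet_key = Ed25519PrivateKey.generate()   # the same key in every phone
    vendor_pub = fleet_key.public_key()        # published by the vendor

    def sign_age_challenge(challenge: bytes, face_matches_id: bool) -> bytes:
        if not face_matches_id:                # local biometric check
            raise PermissionError("face does not match stored passport")
        return fleet_key.sign(b"over-18:" + challenge)

    # Site side: any signature verifies against the single public key, so two
    # visits (or two different users) are indistinguishable to the site.
    sig = sign_age_challenge(b"nonce-123", face_matches_id=True)
    vendor_pub.verify(sig, b"over-18:nonce-123")   # raises if invalid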
Sure, it doesn't work on PCs. Doesn't work on Linux, or on unlocked/rooted phones. It hands users' government ID documents over to Google and Apple. It requires people to carry foreign-made, battery powered, network connected GPS trackers (with cameras, microphones and speech recognition) with them. And there are non-negotiable terms of service everyone must agree to. But if you define "privacy-preserving" to ignore all that stuff and only consider whether Pornhub learns your identity, it's privacy-preserving.
14 year old me ran into porn on the internet all the time. It didn't turn me into a serial killer.
Meanwhile we let kids have exposure to algorithms that pervert their sense of self worth, get them addicted to dopamine and gambling, and make them feel inferior to their peers.
We have the wrong priorities as a society.
And this bullshit is going to turn us into a completely tracked, monitored, controlled bunch of cattle.
"Think of the children" is the stated reason but not the actual reason. We've seen this pattern so many times that it's perplexing that people continue to fall for it.
If the children were the actual reason there are much less invasive solutions that enable reliable parental controls such as mandating self classification of content and fining service operators for inaccuracies.
Think for yourself and consider what the possible ulterior motives might be.
> Sure, and in the meantime try to think and read about how privacy-preserving age verification actually works.
This requires you build a whole apparatus around controlling what people can see, say, and do.
The concept of "slippery slope" is often called a logical fallacy, but in reality it's more than often not a fallacy at all. It's the manner in which you boil the frog.
I think something like over 50% of adults do not have kids now. Why should we put the majority of people, for the majority of their lives, at risk so that a mere 20% of the population can "not see boobs", when good parenting will suffice?
Let's not put a cage around our freedoms. Let's ask parents to be more responsible. In the edge cases where that isn't sufficient, is that really as bad as what could happen to all of our liberties should we go down that path?
We're burning down the whole village because someone saw a cockroach.
That key will get leaked. A key that has to go into every phone will get out, even if it's installed at the manufacturer and onto the TPM chip.
Also even if it doesn't get leaked directly, the security of TPM chips is not absolute. Secrets from them can theoretically be extracted given an attacker with sufficient means and motivation. Normally nothing that's on a typical TPM chip would warrant a project of that magnitude, but a widely used private key can change that equation.
Plus, a TPM chip doesn't really have the means to tell that the phone isn't being lied to. You could swap out the actual phone camera hardware and sensors for a custom board that feeds the phone camera data of your choosing, and it would be none the wiser.
Maybe? But biometric passports, chip-and-pin payment cards and SIM cards seem to do reasonably well. And Apple can always push out a mandatory software update that rotates the key, if they need to.
> You could swap out the actual phone camera hardware and sensors for a custom board that feeds the entire phone camera data of your choosing and it would be none-the-wiser.
Apple's 'TrueDepth' cameras are serialised and paired with the rest of the device. The touch ID sensors were before that too.
I don't know the precise details, but reports from people trying to repair devices independently of Apple are that the phone is very much the wiser.
The app[1] on the user's device[2] forwards that request to the chip on the user's ID card. The user authorizes themselves with their 6 digit PIN stored on the card.
The chip produces a signed reply containing the following payload fields: `issuing_country:string` and `over_18:bool`
What happens when I set up a Tor hidden service that (in conjunction with some client software) stands in for a visitor's device and proxies any requests back to my personal card? After all, the payloads are anonymous, so what's the risk to me?
To prevent this sort of abuse, the server would have to request the `pseudonym` field, which contains a hash across the server identity and the card's secret salt, allowing the server to detect abuse but not to track the user across multiple services.
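A sketch of how such a `pseudonym` could be derived; the field and salt names follow the comment above, not any particular eID spec:

    import hashlib

    def pseudonym(card_secret_salt: bytes, server_identity: str) -> str:
        return hashlib.sha256(card_secret_salt + server_identity.encode()).hexdigest()

    salt = b"secret-salt-inside-the-card"      # never leaves the chip
    print(pseudonym(salt, "example.com"))      # stable for this service
    print(pseudonym(salt, "other.org"))        # unlinkable to the one above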
It's probably even simpler than that: say normal users make a few requests once in a while (because they don't need thousands of tokens every day), while one user makes a ton of requests; that is an indication that this user may be abusing the system.
It would probably be possible to use the service the parent is suggesting and try to link it to requests to the server based on timing. But I don't even know if anyone would bother trying to identify the OP: it would probably be enough to rate-limit the requests.
As always: it's easy to criticise, harder to actually get it right.
Parental controls are intentionally gimped. They do the bare minimum while providing more than enough wiggle room for a tech-savvy teenager. To implement a robust parental control scheme, you need network-level filtering, which isn't something the average parent will know anything about.
I disagree with that, because the teenager should be the parent's responsibility, regardless of how smart or savvy they are. Parents should be talking to their children, communicating what their and society's expectations are. If the parents are attempting to exert technical control over their children, by home router for example, there should be websites or computer shops they can go to. If the parents don't care or are not smart enough to keep up with their teenager, then no type of state mandated gimmick will either.
Teenagers at that level of intelligence, or that determined, will find ways to circumvent whatever control mechanisms a parent or school attempts to use. At some point, it is a matter of the teenager respecting their parents and the rules. Same as if you told a teenager not to drink and drive. You can set up all kinds of technical barriers to block drunk teenagers from driving, but those committed to bad behavior or law-breaking will find ways.
They would be a solution if almost all parents used them, but parents don't want to socially isolate their kids since a lot of "social" activity is now on social media. It's kind of a prisoner's dilemma.
They're not necessarily wrong. Despite the vapid and damaging nature of most popular online media, isolating a child from it might have even worse social consequences when their real-life peer groups discover that they're not on social media or that their parents have neutered their phone. Some kids would turn out fine after that. Others would be socially destroyed for life (maybe with the right therapy they could become well-adjusted, but high-quality therapy is rare).
> They would be a solution if almost all parents used them
No, they are a solution for parents who want to use them, and that's all they should be. Their existence demonstrates that it's possible to handle this without regulation, other than the desire of some people to inflict their preferences onto other people's kids.
You haven't tried to use parental controls much, have you? They are all terrible. They are insanely difficult to set up properly, and even when you do there are a lot of tradeoffs that come with it.
> even when you do there are a lot of tradeoffs that come with it
Absolutely, but those are nothing compared to the tradeoffs of putting attestation or identity verification (sometimes incorrectly described as "age" verification) on numerous sites and inflicting them on everyone.
And my whole point is that it's possible to do age verification in a privacy-preserving manner, and before complaining about the tradeoffs, you should get informed about what they are.
I'm well aware of those possibilities. The two biggest problems with them are that 1) they still apply to everyone, rather than only to those who opt into them and 2) governments and companies are in practice going to push for the versions that identify people and provide more information.
If you make it possible for governments to decide what content is "limited to adults", they can and will abuse that capability. "Porn" is the battle cry, to make it uncomfortable to argue against; often, other information the government wants to restrict becomes a target. The only way to prevent that is to deny the capability in the first place.
Yep, I think this would be a totally valid debate. But my frustration is that it's not there at all. We're at "people make it sound like it's technologically impossible, like the ChatControl for E2EE".
It feels like trying to debate about whether 5G is good or not, and the debate is stuck at people claiming that 5G boils your blood. There are valid reasons to oppose 5G, but if people choose to be so wrong that it sounds like bad faith, they surely won't convince me of anything.
I have yet to see a scheme that would robustly preserve privacy and freedom floated by any of the major efforts. I think the onus is on you to present a workable scheme, but even then I'm not going to support the major efforts which at present are malicious.
Having Privacy in the name doesn't mean it's actually privacy preserving. You can't just ignore attack vectors like collusion between signing entities and websites.
Did you read about how it works? Can you precisely describe an attack that defeats it, or are you just throwing names you've heard without actually knowing how Privacy Pass works? Sounds like the latter to me (yes, I read the RFC).
Your tone isn't appropriate. You don't get to assign reading. If you want to convince people of something then clearly state your case. In this instance that would mean outlining the technical argument.
That said, you've got blinders on. You're all over this comment section condescending to people about a particularly clever scheme without considering the various real-world objections being raised. Not the least of which is that the vast majority of the tidal wave of legislation on the topic has zero to do with ZKPs.
Parental controls can set browsers to a "child mode" where the browser sends an "I am a child" header to the server, and social networks etc. need to honour it. This has existed for twelve years already: https://blog.mozilla.org/netpolicy/2014/07/22/prefersafe-mak... It can probably be amended with a more granular set of levels, but that would be the best way forward.
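For reference, the signal in that proposal is just an HTTP request header. A sketch of the client side (the URL is a placeholder; a compliant server would filter its response and add "Vary: Prefer" for caches):

    import urllib.request

    req = urllib.request.Request("https://example.com/")
    req.add_header("Prefer", "Safe")   # set browser-wide by parental controls
    with urllib.request.urlopen(req) as resp:
        body = resp.read()             # filtered content, if the server honours it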
The problem of "parents are negligent" is also solved by existing laws which have fines for parents who are negligent towards their children, and governments absolutely love collecting fines, so all the incentives are properly aligned.
And it's possible to do age verification in a privacy-preserving manner. I'm tired of repeating it, people should get informed before they complain.
We could totally discuss whether or not privacy-preserving age verification is a good thing. But we can't, because most people can't be arsed to read about what age verification implies, and complain about something that is simply false (i.e. that they would have to surrender their anonymity).
How about we just ban outright the harmful social media that is the reason we would need to attach our IDs to all our internet activity in order to protect the children? Very strange that that's not part of the discussion!
Joe can walk into an Apple store (or wherever they purchased the device) and ask them to enable parental controls on it. We have people whose job it is to service computers and phones, they have been around for more than half a century. I am pretty sure most Joes don't service their cars either, yet they keep them road legal by visiting trained mechanics.
It doesn't provide 100% privacy from everyone, but it does provide privacy from the web service: A worker at a physical store checks your ID, and if it says you are 18, they hand you a token with a unique key on it, which they have a stack of behind the counter. You put the unique key into the web service. It's not necessarily one time use, but if you don't want to risk correlation, you can use each one only once. It's just like alcohol sales, and has all the same failure modes as alcohol sales, but if it's good enough for alcohol sales it's good enough for web services.
Well it probably needs a bit more complexity to avoid being trivially broken. Codes are one time use; the service has them attested by the token provider behind the scenes, and the provider is in turn under contract with the government. Tokens are also activated at the point of purchase similar to gift cards in order to prevent bulk theft and resale. A law in the vein of HIPAA prevents collusion between the retail establishment and the token provider.
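A minimal sketch of the redemption side of such a scheme (token format and storage are made up; in production these sets would be a database):

    import hashlib

    def digest(token: str) -> str:
        return hashlib.sha256(token.encode()).hexdigest()

    valid = {digest("token-ABC123")}   # digests of tokens activated at the counter
    used = set()

    def redeem(token: str) -> bool:
        h = digest(token)
        if h in valid and h not in used:
            used.add(h)                # blacklist after first use
            return True
        return False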
>> A law in the vein of HIPAA prevents collusion
>
> No need if you use cryptography.
True for age verification, but not true in general. If you have something that can be used illegally, it's very handy to allow firms to rent / hire it out anyway but make the hirer responsible for any illegal activity.
An example is hiring a car, and the car is used to ram-raid a shop. Today this is solved by handing over a government ID to the rental company. Commit a crime in the car and they hand that over to police, but it has the sad side effect of handing over information to the car rental they can use to track you, and worse sell to others.
Using a zero knowledge proof for a valid driver's licence fixes the privacy problem, but at the expense of the hire company not being able to transfer responsibility for illegal activity onto the hirer. I suspect if that happened no one would hire out cars any more.
You can easily design something that is Zero Knowledge to the car hire firm, but includes an opaque token they can hand over to the government on lawful demand. It contains all the details needed to pursue the law breaking hirer. Thus there is still a role for the law here - you can't always do everything with crypto.
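A loose sketch of that opaque token, assuming PyNaCl for the sealed-box encryption (the licence string and flow are illustrative):

    from nacl.public import PrivateKey, SealedBox

    gov_key = PrivateKey.generate()                  # held only by the government

    # At hire time, the credential also carries the hirer's details encrypted
    # to the government's public key; the hire firm cannot read them.
    escrow = SealedBox(gov_key.public_key).encrypt(b"licence #12345, Jane Doe")

    # The firm stores `escrow` with the booking. Only on lawful demand:
    details = SealedBox(gov_key).decrypt(escrow)
    assert details == b"licence #12345, Jane Doe"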
This is a very minor quibble - I agree completely with what I think is your main point. This Google change is a privacy disaster. It's a step towards an enshittified internet with the gateways onto it controlled by a few big tech firms.
But I don't think just yelling "just use ZK" is helpful. It's much harder than that - ZK is only part of the puzzle. Passkeys are currently caught up in the same attestation trap, and there is no workable solution in the offing. Banks and other high-trust applications need some assurance your FIDO private key is being handled securely. The solutions on the table are Apple not doing attestation, or Google, who does it at the low low price of handing your true name to Google. Both "solutions" suck, horribly.
ZK proofs of things like licences and age have to solve the attestation problem, and solve extra stuff as well. I'm not holding my breath.
> But I don't think just yelling "just use ZK" is helpful.
Agreed. I am just very frustrated, because I feel it is an important topic. And I wish I saw adult discussions about it. And instead, people who claim to be "tech-savvy" keep whining about the fact that it will fundamentally leak their ID everywhere. Like they somehow understood the point for E2EE, and repeat it here confidently. If tech-savvy people can't be bothered to understand how this works, why should politicians?
I have the same frustration with the anti-5G crowd yelling that it will boil your blood. There are many valid reasons to criticise 5G and have a constructive debate, but they choose to be wrong anyway.
> If tech-savvy people can't be bothered to understand how this works
You underestimate your own abilities. Being tech savvy doesn't mean thinking much about crypto.
To get a feel for this I asked Gemini "If you were to survey a group of people who would be called 'Tech Savvy', what percentage of them would be aware you could construct a zero knowledge proof for a person's age that revealed nothing beyond they were older than a given threshold?". The answer was 5%-10%. That rises to a surprisingly low 20%-30% for Software Engineers. It's only once you get to Software Engineers who write security systems that you get above 50%.
Gemini didn't give any references so those figures could be complete rubbish, but in my experience they seem on the high side. Many very experienced engineers I interact with clearly have not thought very deeply about how crypto systems interact with human trust. Granted understanding the implications of crypto is yet another step beyond understanding the maths, but I'm amazed at how many technology curious people haven't bothered to take that step.
The good pollies on the other hand probably have a very good intuitive feel for human trust systems and how to navigate them. They rely on engineers to tell them what is possible of course, and they won't care about the details. But what they will care about is whether the engineers can deliver the system they promised, and there I have to admit our track record is appalling. How many government IT initiatives have you seen deliver what was promised on time and on budget? So when you tell them you can build a ZK system that delivers on all these privacy promises, expect a very sceptical reception.
You can prove your signature is from a key which is a member of an acceptable set without revealing which one. These schemes can also prevent excessive reuse, e.g. by you also proving that some linked value is a hashlike function of your private key, the date, and the domain; so if you sign multiple times for the same site in the same day your uses are linked, and someone can't just toss up an oracle that gives endless authentications.
Such systems are deployed in production by privacy-preserving cryptocurrencies, as it's the same problem: prove you're spending a coin that exists without revealing which one, and prove that you're not spending it multiple times.
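For concreteness, a toy version of that linked value (in a real scheme you prove in zero knowledge that the tag was derived from your hidden key, rather than computing it in the clear like this):

    import hashlib

    def linking_tag(private_key: bytes, domain: str, date: str) -> str:
        # Deterministic per (key, site, day): repeat signatures on one site
        # in one day share a tag and are linkable; different sites or days
        # yield unrelated tags, so honest use stays unlinkable.
        return hashlib.sha256(private_key + domain.encode() + date.encode()).hexdigest()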
Less private but easier to implement is just simple blind signing. Site asks you to give them a signature of their domain name, your account name, and date. You blind the data using a random number, go to google and identify yourself (e.g. solve a CAPTCHA, check your mobile device, age verify, whatever) and ask them to sign the blinded value-- they rate limit you and give you a signature. You unblind and provide to the site. Now the site knows you passed the google rate limit but nothing else, but google never learns what site you authenticated to.
The blindsigning approach is kinda lame because it requires active communication with a third party that learns you're online and authenticating to stuff. So I think it's generally less preferred but the cryptography is hardly any more complicated than an ordinary digital signature.
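A toy round of that blind-signing flow, using textbook RSA with a tiny key purely to show the algebra (a real deployment would use something like RSABSSA, RFC 9474, with proper padding; all values here are illustrative):

    import hashlib, secrets
    from math import gcd

    p, q, e = 61, 53, 17                   # demo key; real keys are 2048+ bits
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))      # signer's private exponent

    msg = b"example.com|alice|2026-02-05"  # domain, account name, date
    m = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    # Client blinds with a random r coprime to n; signer sees only `blinded`.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    s_blind = pow(blinded, d, n)           # signer signs blindly, after rate limiting

    s = (s_blind * pow(r, -1, n)) % n      # client unblinds
    assert pow(s, e, n) == m               # site verifies an ordinary RSA signature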
Ring signatures do this: given a set of public keys and your own private key, you can produce a signature attesting that one of the corresponding keys signed it, but not which one. This lets both Google and you generate a signature and say “this is attested”, without the person verifying it knowing _who_ signed it.
You likely need one other step beyond a plain ring signature, often called a linkable ring signature. If you use only a plain ring signature I could get one authenticated key and set up a site that gives away an unlimited number of access tokens with it, and you can't identify which key is doing so in order to kick it out.
A linkable ring signature lets you correlate multiple usage but only if they share a common 'context value'. Intelligent selection of the context value results in abusive use inevitably sharing a context so you can exclude or rate limit it, but honest use tends to not share a context so the privacy is preserved.
All states/governments have basic records on their citizens and residents, including a name, date of birth, address, etc., at minimum for a passport or driver's license, if not an actual ID card. Let's assume this is acceptable.
Then it's technically possible (and really not that difficult) for states to provide a service that issues zero-knowledge proofs of facts like "age > X".
(partly off-topic rant) One can argue this is a false-premise fallacy. For most of history, states did not have this information about their citizens, and the world progressed quite nicely. The only argument for knowing this about citizens who don't drive (an increasing number) nor travel abroad (a different problem altogether) is to tax them?
One of the foundational differences between humans and cattle was that you cannot brand (https://en.wikipedia.org/wiki/Livestock_branding) humans. We can't physically, but we do it digitally, and I see a slippery slope.
The discussion was about age verification, not about the (rather more extreme) position that it's illegitimate for the state to hold information about its citizens.
> For most of the time states did not have this information about their citizens and the world progressed quite nicely.
This is quite untrue. State bureaucracies far predate the modern era.
The problem is that while you might be able to trust the crypto, the government won't trust you to do the crypto entirely by yourself. And this introduces avenues for deanonymisation. Moreover, collusion between the government and the entity making the age check can also theoretically deanonymise you.
It's a complicated problem.
We continue to seek a technological solution to a parenting problem.
I feel like it becomes bad faith at some point. With a sufficiently advanced attack, you can be personally identified today. ZKP for age verification does not make this worse, does it?
It's a bit like saying "no but Signal is not really encrypted, because the government can extract some metadata by looking at the network around the server".
Look at Apple’s PAT: the website knows the service that did the attestation, but not the user. The service knows the user, but not the website. If you controlled both you could link the user, but otherwise you can’t.
As far as I know no currently proposed age verification method does this in practice.
The only way to implement truly privacy preserving age verification is through zero knowledge proofs (or blind signatures) but what that would allow is undetectable token forging.
The EU's proposed system uses ZK proofs. You get a PGP-signed message from "someone" who knows your identity (government or private agency), then store it on your phone to pass to websites that need your age. It does have an obvious flaw in that whoever you give the token to has no proof it's actually yours.
> It does have an obvious flaw in that whoever you give the token to has no proof it's actually yours.
Which isn't necessarily a flaw, depends on the threat model. For actual age verification that we care about (e.g. make it harder for kids to access social media), it may be good enough.
No it can't. If it's done in a truly privacy preserving way then someone can also sell a fake age verification service making the whole thing meaningless.
I don't see any requirement to support hardware attestation in the reCAPTCHA documentation; the Play Services seem to be "enough".
I think it's most likely to be attested by Google remotely; they might be using an app (one with enormous access to the phone, as the Play Services have) to link a ton of data together, possibly including the local activity on the phone, officially in order to make better humanity assessments based on it all.
For people using a Google account it probably won't make a huge difference, in terms of data collected.
If that's how it would work, spoofing would probably be theoretically possible, but it would be easy for Google to detect attestations used by multiple people.
Let's not forget that this is an update to a very approximate system, absolute security is not (yet) required.
But there's a good chance that it will be extremely hard to sidestep, despite that.
> they might be using an app (with enormous access to the phone as the Play Services have) to be able to link a ton of data together, possibly including the local activity on the phone
But anything your phone can possibly do in software can be spoofed, so how would that help?
No, Play Integrity is a set of numerous features, and the developers decide which ones to use, and how to react to what the API reports.
Hardware attestation is one feature, but it's still not used a lot.
The most common feature is the check that your Google account really downloaded the app you're using (and that the app wasn't modified); which requires using a Google account, of course. This is what the "pairip" that's been plaguing the store for a year does (it's being added by a ton of apps because adding it only requires enabling a preference in the Play Console).
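To make the "developers pick the features" point concrete, here's a rough sketch of server-side checks on a decoded verdict. Field names follow Google's published verdict format as best I recall; treat them as illustrative and check the official docs:

    def acceptable(verdict: dict) -> bool:
        device = verdict.get("deviceIntegrity", {}).get("deviceRecognitionVerdict", [])
        app = verdict.get("appIntegrity", {}).get("appRecognitionVerdict", "")
        licensing = verdict.get("accountDetails", {}).get("appLicensingVerdict", "")
        # Requiring MEETS_STRONG_INTEGRITY would demand hardware attestation;
        # many apps stop at the basic device + licensing checks described above.
        return ("MEETS_DEVICE_INTEGRITY" in device
                and app == "PLAY_RECOGNIZED"
                and licensing == "LICENSED")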
> having all your devices blacklisted. Private farms probably won't last long either as I'm sure Google logs everything and will correlate.
So basically Google can now ban your device from being able to access a huge portion of the internet, in addition to nuking any online presence connected to them.
You could wake up one day and find your device blacklisted from the internet, with no chance of ever reaching customer support. What a lovely future
That's great until it's some essential government, medical, educational, etc. service that you have either no alternative to or no alternative that isn't also using the same thing. I'm already being slowly and incrementally softlocked out of some (fortunately non-essential so far) sites either by cloudflare or other more subtle "anti-bot" networks as time goes on, including some like I've listed above. I can only expect this will continue until it's something I can't avoid.
For some reason, I'm softlocked from booking tickets from Deutsche Bahn. The website errors out with a cryptic "Your browser's behavior resembles that of a bot." message with no option to try again or pass a captcha or whatever. The website itself described several possible solutions but none helped (I tried using different computers, different internet connections, even a phone connected to internet using a SIM from a different country).
For now, when I need to travel to Germany, I just book tickets through the national carrier of my home country, which for cross-border tickets often turns out to actually be cheaper than booking through DB. Thankfully I don't live in Germany proper and my need for travel there is not that high (once or twice a year at most), but I wonder what I would do if I had to move to Germany and use trains there more often.
Same problem but with French equivalent SNCF (sncf-connect.com). I just checked and can confirm nothing has changed. You cannot use up-to-date Firefox on Linux to access the main booking site for French rail tickets.
Access is temporarily restricted
We detected unusual activity from your device or network.
Reasons may include:
- Rapid taps or clicks
- JavaScript disabled or not working
- Automated (bot) activity on your network (IP X.X.X.X)
- Use of developer or inspection tools
I just opened the developer tools, then chose 'Separate Window' from the menu. The developer tools are now on my other screen, and then I clicked Reply to your message. The developer tools window that I had open is not related to this tab, but when I opened Developer Tools for this tab, it remembered that I wanted it in a separate window and did so again. The viewport should not have changed at all...?
No, it won't, and this mechanism should not be used by anyone, but it'd at least ensure that people aren't forced to use it to interact with their government.
With the new reCAPTCHA this is going to happen because most human visitors will actually be unable to pass the CAPTCHA. It will be interesting to see whether this makes websites ditch reCAPTCHA or whether they literally just don't care about having customers, an attitude that seems to be getting more and more common every day.
I have been unable to give my money to Home Depot, REI and a growing list of online retailers because they use Akamai EdgeSuite, which just assumes I am a bot and 403s on protected API calls. This happens consistently on any IP and any browser on my Linux desktop/laptop.
There are not enough words to describe how much I hate Akamai EdgeSuite. So many random validation loops and 403s across different physical computers, different operating systems, different connections and even countries. A couple of services I need use it, and it's maybe a 30% chance I'll make it past their stupid "protection".
It has a zero percent chance of reaching anyone who can do anything about it.
You could try handwriting and posting a letter to their CEO. I think that sometimes works. Probably not very often but there are more than zero CEOs who read those letters.
You can also send an email if you're lazy. In both cases the CEO probably won't read it, but a more-than-minimum-wage secretary probably will pass it on to corporate customer support, which IME is a lot more useful than the regular support that the company wants you to use.
Maybe they'll figure it out when their revenue drops next quarter, or the ones after that?
I was thinking in the same terms: you put up a QR captcha, you don't get my traffic and money. Just the amount of extra work needed, let alone the Google tracking, turns me off. As if traffic lights, crosswalks and bridges weren't enough of a hassle.
REI Co-op has an Annual Members Meeting in Seattle, where it announces the results of the board of directors election.
The 2026 one happened Feb 5. Apparently the presentation is only 8 minutes long, with some saying it's pre-recorded, and it's near-impossible for members to submit a question that actually gets answered:
One problem with these things is that businesses have minimal visibility on the amount of users they lose.
Quite the opposite: if they see reports of many visitors not completing the captcha, they're likely to think "Wow, so many bots!!! This defense is indispensable nowadays!".
Sometimes you need to pass a captcha even to contact them (if you want to tell them that you can't pass their captcha).
I wanted to give money to a charity, and they have the whole form protected by reCAPTCHA. So I would have to allow all my personal information and the amount donated to be sent to Google (and agree with Google's terms for data processing). I contacted them, but they did not understand why this is a problem; they just wanted to protect themselves against bots. IMHO unless these things are disallowed by antitrust laws, we have lost.
I suspect this is a real problem for charities, though. If those bots are using stolen credit cards, the "donations" are going to cost the charities money after they pay extra fees to the credit card processors. Nonprofits are sometimes used to test stolen credit cards before making more profitable fraudulent transactions, so there's a real risk of it costing them money if they get rid of the captcha but don't replace it with something sufficiently high quality, even after accounting for the occasional lost donation.
Merchants often pay a chargeback fee on top of refunding the main charge. Additionally, merchants with lots of fraud or other chargeback issues are likely to be dropped by payment processors or see their general fees with payment processors get more expensive.
> most human visitors will actually be unable to pass the CAPTCHA
Most human visitors will never ever notice the change. reCAPTCHA is completely invisible for most human visitors because they are allowed to pass just by fingerprint.
It's not like an average user is going to have to scan a QR code every time they visit a site via web browser. If it were like this then it would be a non-issue because no sane website would adopt this system. But it isn't.
This is not true. Maybe in the US, but in many countries you get captchas all the time on residential connections, and also in public places: internet cafes, airports, cafe wifi and so on. Everyone will run into it at least once, and that way there is a permanent fingerprint correlated with a real identity. I can bet that EVERYBODY will get it at some point, so Google and the other people on board with this atrocity (webmasters are also accomplices) can finish up the master plan.
>> whether they literally just don't care about having customers
So every government website. Every website where people simply have no choice (DMV) or where failure to login results in them not claiming the money/benefits they are due (all tax websites). And every website handling post-sale complaints (Airlines, insurance).
> Stop visiting sites and using services that use reCAPTCHA. Problem solved.
Not solved at all: 99.999% of users don't give a damn and use a Google-signed Android.
My opinion is that because they don't give a damn does NOT mean regulations should not protect them. What Google is doing here is anticompetitive and they should be fined (antitrust and all that).
I don't actually see the connection with Google-signed Android. Do people really want this friction when they visit a website? Like having to get your phone from another room and use the camera and all that, just to access a website? This is such an anti-pattern and is disrespectful toward consumers; imo any webmaster participating in this should rethink their career and morality.
There is, but at least in the US neither party cares. They want to get rid of anonymity online, one to throw anyone who googles "trans" in jail, and the other because their biggest donors are tech companies that want to deanonymize everyone.
Our antitrust laws have been toothless for decades, and both parties love billionaires controlling the rest of us with an iron fist.
GrapheneOS is looking more and more worth the headache, even if my limited free time generally doesn't appreciate headaches. I don't need Google to know my smut fanfiction is written by me IRL.
I felt the same way about GrapheneOS, but a few friends set it up, so I gave it a try. It is easy to install and use. As evidence: I gave my 70-year-old father one and he loves it.
When my friend was telling me about GrapheneOS I was thinking back to the old days of Android custom ROMs, all the bugs and bullshit, the time I couldn't dial out to 911 because my custom ROM crashed when I did, and other issues. So I gave it a pass.
However, he's been on it now for months, and every time he shows me something on it I get a little more jealous. Everything seems to be working fine, including e.g. bank apps, and he has interesting features like some kind of app zoning thing limiting permissions on a zone-by-zone basis.
The only problem is it's only available on massive phones without headphone jacks and SD card slots, so I'm sticking with Xperia for now.
> Ask HN: Did HN just start using Google recaptcha for logins? [0]
> dang
> No recent changes, but we do sometimes turn captchas on for logins when HN is under some kind of (possible) attack or other. That's been happening for a few hours. Hopefully it goes away soon.
At least in my country (Poland) you should be able to make a pretty big fuss and get them to fix it, if indeed one of the e-government services made you leak all your data to Google.
The other problem with this is that there are few CAPTCHA alternatives.
CF turnstile is one, but of course that means Cloudflare owns even more of the web.
HCaptcha is inaccessible and actively discriminatory against individuals with disabilities and refuses to change, to the point that I suspect the only way that they will do anything is to file a class-action against them and sue them into the ground.
And I... Can't think of anything else. Other than to just get rid of Captchas entirely.
You could just have a custom one that asks domain-specific questions (and ones which will trip up LLMs are not hard to come by). I've seen a few forums ask such questions for registration, long before the rise of LLMs.
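Something as small as this can work (question and accepted answers invented for illustration, for a hypothetical electronics forum):

    QUESTIONS = {
        "What resistor value would you grab for a basic 5V LED circuit?":
            {"220", "220 ohm", "220ohm", "330", "330 ohm"},
    }

    def passes(question: str, answer: str) -> bool:
        return answer.strip().lower() in QUESTIONS.get(question, set())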
There are other captcha alternatives besides Turnstile, for example Private Captcha, Altcha, etc. They are owned by mostly “small” independent companies, they are not visual captchas (they are proof-of-work based), and they are very accessible.
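The proof-of-work idea is simple enough to sketch; the difficulty and encoding details below are made up, but the shape matches what Altcha-style captchas do:

    import hashlib, secrets

    def make_challenge() -> str:
        return secrets.token_hex(8)

    def solve(challenge: str, zeros: int = 4) -> int:
        # The browser grinds nonces until the hash has `zeros` leading hex
        # zeros: cheap for one visitor, expensive at bot scale.
        nonce = 0
        while not hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest().startswith("0" * zeros):
            nonce += 1
        return nonce

    def verify(challenge: str, nonce: int, zeros: int = 4) -> bool:
        return hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest().startswith("0" * zeros)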
Compliance is what makes all that shit possible. Sadly most people are compliant and made so by gradually increasing their dependency on "commodities" which really are anchors to a shit lake.
Suddenly I have been made aware that, having lost my paddle on Shit Creek, I will eventually be taken downstream to Shit Lake (where it appears I will inevitably drop anchor).
Oh just wait, the AI phone service on their side will be more than happy to complete your device attestation key challenge by touch tone. We have to make sure you are still you after all!
But in all seriousness, many services are making it difficult through to impossible to communicate outside of their web or app platforms. Call centres are expensive and messy, and it's now apparently acceptable as a society to treat customers/clients/whatever as adversaries, so they can get away with making it hard to communicate with them.
I was unable to book a doctor's appointment through the clinic's website, so I declared "screw tech" and called their call center, which still worked better. The app just searched for the "first available spot" and never found anything. If they axe the call center I'm going to have to show up at their door.
> Are you comfortable with anybody being able to ring up the hospital and say "yo, it's majorchord, how are my gonnorhea results?"
No, that's why we have safety protocols in place. When you call a doctor they ask you for your birthdate or sometimes also a PIN/password on your account to protect your data.
How would that still be considered a breach of privacy?
Alright. I didn't know that. "Just call them" did not sound like it included any kind of authentication procedure.
But giving a birthdate (available to anyone via a single query in a public database) and (sometimes?! what?!) a PIN over the phone wouldn't really be considered good enough here. A birthdate is, as I said, public knowledge. And a phone is too insecure a medium for transmitting a password.
I'm not super interested in an long argument about whether it's reasonable that this isn't considered secure or not. I'm just letting you know what reality looks like. And the reality is that "just call them" is not a solution, because such information will simply not be handed out over the phone.
Why is every startup using that same serif font now, Garamond or whatever? Is it an LLM design phenomenon? It's kinda ruining that font style for me.
Also, $1,500 a month for 10 "influencers" is wild. This doesn't seem that sophisticated unless they're doing something special to increase the trust scores of accounts. They say they have an "in house warming algorithm", which honestly doesn't inspire confidence in me.
What's funny is it's almost a certainty (if they are doing things correctly) that they have literal farms of phones (probably in SEA). The only real way to keep trust high is to have a real mobile connection and unique devices. Proxies are okay, but you really need to use the apps on real hardware.
Interesting article, thanks. I've done a bit of small scale phone farming (for my own cheap mobile proxies). In all reality the phones aren't that expensive, I went with Moto 5gs that cost $130 (retail), so in their case the phones pay for themselves in the first month.
Probably a decent amount of compute cost for video generation, but I'm sure they have access to free compute and inference for being in bed with a16z.
How is this not grounds to be sued into oblivion by Google and Meta? They clearly violate ToS for profit. This is something I expect to find on a dark web forum where 0days are traded, not in public.
> How is this not grounds to be sued into oblivion by Google and Meta?
Because they don't care. It doesn't matter that it's AI slop, it generates views. And Google and Meta can bill advertisers for those views.
Zuckerberg is paying people to put AI slop Shrimp Jesus on facebook. (Not directly to platforms like this, but with the incentive structure)
Really, they're not just cashing in on the views of AI slop being put in front of boomers. They're cashing in both ways: while the low-end spam industry is merely guessing and iterating on whatever generates views, the more refined spammer does not leave the performance of their latest slop post up to chance, and just uses good old viewbotting. Viewbotting that, these days, is mostly done on real devices. Which show ads, to the bots or underpaid developing-world workers. Google and Meta'll still charge you for those impressions though.
The losers? People who sincerely try to use these platforms, and whatever idiot businesses are still paying for ads by the impression or click, rather than conversions that immediately generate revenue.
This kind of thing has been common for ages. Obviously AI has kicked it into overdrive, but it’s not darkweb kind of stuff.
Note that they do not mention any specific companies on that landing page. That is pretty intentional.
But realistically going after bots is expensive and rarely successful, so most companies don’t do it. Even if you find the guy, the chances they can be legally reached are pretty low.
It could be contextual, as in each user gets one anonymous id per domain name per day. Multiple uses by the same user at the same domain in the same day are linked.
But much of the purpose of these systems is to violate the public's privacy and exert as much surveillance and control as possible. If not for that schemes that mitigate the privacy loss would be a top priority.
Apple has their own remote attestation infrastructure and you will not be able to impersonate an Apple device without extracting private key material from the secure enclave of a legitimate Apple device or compromising Apple certificate authority private keys.
In the UK, the Department for Education's guidance is that schools should be mobile-phone free. Students use computers to access the web fairly regularly. Guess that would be problematic then, since many schools' policy is that mobile phones should be turned off and stored in your bag during the day.
> Recital 49 - Network and Information Security as Overriding Legitimate Interest
> The processing of personal data to the extent strictly necessary and proportionate for the purposes of ensuring network and information security, i.e. the ability of a network or an information system to resist, at a given level of confidence, accidental events or unlawful or malicious actions that compromise the availability, authenticity, integrity and confidentiality of stored or transmitted personal data, and the security of the related services offered by, or accessible via, those networks and systems,...
It's funny how people after all this time think 99 Articles, 173 Recitals and a huge tech lobby equals a water-tight, pro-citizen, impenetrable privacy law with almost no exemptions.
Training yourself to remember dreams by writing them down before they fade away is paramount; it's not enough to just think about them - they still somehow fade away along with your thoughts about them. Then read what you wrote before going to sleep again.
If you want to achieve lucid dreaming consistently you also have to develop a habit of doing reality checks. The most effective one is to pinch your nose and try to breath through it, in your dreams it will almost always work and the surprise is major.
Checking clocks for consistency works too, and text as well, though they are less reliable. Some people swear by rotating a text-containing object upside down to see if the text auto-rotates; apparently it does in their dreams. Some people can't read anything in their dreams.
It seems unlikely that a true Zero Knowledge Proof system for things like age verification would ever be allowed.
Also, remote attestation doesn't work that way and for good reason. Under a true ZKP system, a single defector (extracted/leaked/etc key) would be able to generate an infinite number of false attestations without detection.
> It seems unlikely that a true Zero Knowledge Proof system for things like age verification would ever be allowed
This article is about EU age verification, which is specifically and definitely stated as using zero-knowledge proofs in all the technical docs that I've seen:
In that case Google Play Integrity cannot be used.
It certifies devices still running Oreo (because the vendor didn't provide updates), meaning there are almost infinite vulnerabilities that would allow leaking the keys.
> The training doesn't evaluate "is the answer true" or "is the answer useful." It's either "is the answer likely to appear in the training corpus" or "is the RLHF judge happy with the answer." We are optimising LLMs to produce output which looks like high quality output.
It's not quite as dire as this. One of the main reasons why LLMs are getting better over time is that they are used themselves to bootstrap the next generation by sifting through the training set to do 'various things' to it.
People often forget that the training corpus contains everything humanity ever produced and anything new humanity will produce will likely come from it as well. Torturing it with current generation models is among the most productive things you can do to improve the next generation systems.
Comparing Deep Learning with neuroscience may turn out to be erroneous. They may be orthogonal.
The brain likely has more in common with Reservoir Computing (sans the actual learning algorithm) than Deep Learning.
Deep Learning relies on end-to-end loss optimization, something which is much more powerful than anything the brain can be doing. But the end-to-end limitation is restricting; credit assignment is a big problem.
Consider how crazy the generative diffusion models are, we generate the output in its entirety with a fixed number of steps - the complexity of the output is irrelevant. If only we could train a model to just use Photoshop directly, but we can't.
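What I mean by a fixed number of steps, as a sketch (`denoise` stands in for a trained model and is a made-up placeholder):

    import numpy as np

    def sample(denoise, shape=(64, 64, 3), T=50):
        x = np.random.randn(*shape)    # start from pure noise
        for t in reversed(range(T)):   # always exactly T refinement steps,
            x = denoise(x, t)          # however complex the final image is
        return x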
Interestingly, there are some attempts at a middle ground where a variable number of continuous variables describe an image: <https://visual-gen.github.io/semanticist/>
If you think a 2 year old is doing deep learning, you're probably wrong.
But if you think natural selection was providing end to end loss optimization, you might be closer to right. An _awful lot_ of our brain structure and connectivity is born, vs learned, and that goes for Mice and Men.
Why not both? A pre-trained LLM has an awful lot of structure, and during SFT, we're still doing deep learning to teach it further. Innate structure doesn't preclude deep learning at all.
There's an entire line of work that goes "brain is trying to approximate backprop with local rules, poorly", with some interesting findings to back it.
Now, it seems unlikely that the brain has a single neat "loss function" that could account for all of learning behaviors across it. But that doesn't preclude deep learning either. If the brain's "loss" is an interplay of many local and global objectives of varying complexity, it can be still a deep learning system at its core. Still doing a form of gradient descent, with non-backpropagation credit assignment and all. Just not the kind of deep learning system any sane engineer would design.
I don't know what you mean by end-to-end loss optimization in particular, but if you mean something that involves global propagation of errors, e.g. backpropagation, you are dead wrong.
Predictive coding is more biologically plausible because it uses local information from neighbouring neurons only.
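A caricature of what "local information only" means, for a single layer (not a full predictive-coding network; the update uses only this layer's own error and input):

    import numpy as np

    def local_step(W, x, target, lr=0.01):
        pred = W @ x                   # this layer's prediction
        err = target - pred            # error available locally, no backprop
        W = W + lr * np.outer(err, x)  # Hebbian-style local weight update
        return W, err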
Modern systems like Nano Banana 2 and ChatGPT Images 2.0 are very close to "just use Photoshop directly" in concept, if not in execution.
They seem to use an agentic LLM with image inputs and outputs to produce, verify, refine and compose visual artifacts. Those operations appear to be learned functions, however, not an external tool like Photoshop.
This allows for "variable depth" in practice. Composition uses previous images, which may have been generated from scratch, or from previous images.
> If only we could train a model to just use Photoshop directly, but we can't.
It is probably coming, I get the impression - just from following the trend of the progress - that internal world models are the hardest part. I was playing with Gemma 4 and it seemed to have a remarkable amount of trouble with the idea of going from its house to another house, collecting something and returning; starting part-way through where it was already at house #2. It figured it out but it seemed to be working very hard with the concept to a degree that was really a bit comical.
It looks like that issue is solving itself as text & image models start to unify and they get more video-based data that makes the object-oriented nature of physical reality obvious. Understanding spatial layouts seems like it might be a prerequisite to being able to consistently set up a scene in Photoshop. It is a bit weird that it seems pulling an image fully formed from the aether is statistically easier than putting it together piece by piece.
> If only we could train a model to just use Photoshop directly, but we can't.
They're obviously more general purpose but LLMs can also be used to drive external graphics programs. A relatively popular one is Blender MCP [1], which lets an LLM control Blender to build and scaffold out 3D models.
The frequency of fireballs in our planet’s skies seemed to grow in recent months. NASA and other meteor experts can’t agree on what explains it.
...
In response to growing public interest, a NASA public affairs official said in a blog post at the end of March, “While it may seem like meteor reports and sightings have been more frequent recently, it is not out of the ordinary.” The post explained that from February to April, there is often a 10 to 30 percent increase in the number of extremely luminous meteors — and nobody is quite sure why.
Mr. Hankey said that this 10 to 30 percent increase was already baked into the American Meteor Society tally, and that it doesn’t explain the apparent doubling of fireball sightings in the year’s first quarter.
Can you please also quote how these sightings are tallied?
Is that an astronomical observation by same people or is that based on self-reporting citizens?
"People see more stuff in the sky" is a common sign for people getting more anxious about attacks from the sky. To my knowledge, first UFO reporting waves happened during cold war when people started to get paranoid about soviet spying.
> The frequency of fireballs in our planet’s skies seemed to grow in recent months.
It feels reductive to point out that this has coincided with a massive increase in the number of small satellites with limited lifespans up there.
(And yes, you'd expect NASA and the AMS to have thought of that but I honestly wouldn't put it past them to be deliberately ignoring Starlink satellites given Musk's political power and petulance to people who cross him.)
This is more or less what happens. These models are tuned with reinforcement learning from human feedback (RLHF). Humans give them feedback that this type of language is good.
The notorious "it's not X, it's Y" pattern is somewhat rare from actual humans, but it's catnip for the humans providing the feedback.
Using blind signatures for remote attestation has actually been proposed, but no one notable is currently using it: <https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation>