This is a bit unrelated to the harder crypto stuff, but my mom called me freaking out because she couldn’t get into her Gmail. It turns out Google auto-registered her new Android phone with a passkey and made that the default Google login, with a confusing passkey-based interface (expecting her to know to click the second option to log in via password, or to understand wtf a passkey is, was too much IMO).
When it said “passkey sent to android,” the phone never got any notification, and I couldn’t figure it out after half an hour. You can’t even delete the auto-registered passkeys. Nor turn off the default auth flow.
Terrible UX by Google. I’m assuming it’s because her phone is some budget Samsung with a bastardized Android. Trusting those devices on a mass scale to run your auth system was dumb.
No, I'm holding my S21 in my hand, unlocked. About 1/3 of the time, there's no notification. Or it takes five minutes to arrive.
This is only one of many problems I've had with Google recently. I went from haphazardly trying to avoid their products for privacy reasons to now putting max effort into minimizing my Google usage because everything they do is badly broken. I've just spent two weeks getting my business re-listed on Maps after it was flagged for no reason. Impossible to talk to a human. They give you a number to call on every support email, but it's for advertising support who can't do anything about a suspended business account and are very surprised (refuse to believe in fact) that the Google business team gives out their number for support. It took me half an hour of repeating the question in different ways for the Google Ads support woman to admit that there is no way to talk to a human about Google Business.
Welcome to the club. I still have a gmail I use for some family and old friends, and there's a lot of history there, but I generally avoid using it unless it's a throwaway now. And... when dealing with clients, I suggest alternatives to GA, google maps, etc. Occasionally they override me, but I'm helping to get alternatives out there.
12-15 years ago, I was using google pay/wallet/something to accept payments for some projects. They just... rebadged it, changed the terms, etc. I couldn't even figure out what it was being changed to entirely, but, they seemed to not give a shit about orgs like mine who were trying to use their products to conduct business, so I gave up.
I've told my stories to many folks in person; I'll get "oh, you didn't understand abc.." or "that's never happened to me - you must have done something wrong". About a quarter of those people later indicate to me that... yep... they've been hit by some weird google bug or issue or deprecation or abandonment that cost them time/money with no real support options, and they then take steps to get off the google train.
I know I still have a ways to go to get anything critical to my life out of google's way, but every month I get a bit closer.
The problem is, as a small brick and mortar business, there is no alternative to Google Maps. I mean, sure. Technically alternatives exist. But if your customers don't use them, they are meaningless.
Of course we are listed on OpenStreetMap. But my guess is that since we opened, the number of customers we got that way rounds to zero.
Meanwhile, we get delisted from Google Maps and our revenue instantly drops about 70% (we are in a tourist area). It sucks. There's nothing we can do about it except make frustrated posts like this.
Exactly, I'm seriously considering taking my business off Google's ecosystem because of the unwanted "we've sent a notification to your phone" confirmation requirement. I could understand if they did that if I suddenly tried to log in from the other side of the world, or let's say I always use Linux and suddenly my browser identifies as Windows, etc. But if you're running a privacy-focused browser (ungoogled-chromium), it happens every single time you try to authenticate. From the same IP.
What if my phone is dead? What if I lost it and I'm trying to recover location service credentials by sending a password reset to my email and I can't log in? Then the whole "answer some questions" dance starts. If a hidden and unaccountable algorithm decides that you running ungoogled-chromium on Linux is suspicious, and you happen to misremember an answer to some question it might ask (what percentage of available storage are you currently using?), good luck gaining access. BTW, I am a paying user of Google.
After I deleted my personal google accounts, I was left with work google accounts I would have to maintain.
I have been bitten by this problem more than once, resulting in:
- losing some accounts forever
- temporarily losing access to accounts, preventing me from working for some time
- forcing me to go through recovery procedures with tedious docs and hostile UI, wasting my work time
I eventually found a trick: buy 3 Yubikeys, attach each of them to all your accounts, and keep one on your key ring, one in your desk, one in your bag.
Now, the only thing Google ever asks for is the Yubikey, no matter where I connect from, and I always have one on me. It doesn't require a smartphone, a phone number, or an email.
I'm still trying to get rid of as many Google accounts as I can, though. Personally, I'm very cautious about any dependency on Google services, and professionally, adamant about avoiding dependencies as much as is reasonably possible.
I used to be a Google fan 20 years ago. Between the bad user support, the privacy invasion, the decreasing search quality, the monopolistic practices, the censorship, the DMCA situation, the product cancellations, the terms-of-use / price switcheroos and those shenanigans, they are consistently destroying my faith in them.
But they will not pay the price for it. First, they have enough money to make mistakes for a long time without even noticing. Second, they will pull off a Microsoft-style PR stunt in 15 years, and everybody will forget and forgive.
Somewhat unrelated, but I got one of those Google Titan fobs. The one time I needed it to work - authenticating from a new-to-me computer - it just... didn't work. I plugged it in and... nothing. No popups, no reaction at all. Thought it was broken, but it worked on another computer when I tried it later. No idea how this future is supposed to be better. Perhaps Titans are just duds? A couple Yubikey-focused friends have used theirs for years, but I wonder if they only talk up their successes, and don't mention the failures?
Yubikey-based workflows are finicky. Sometimes, I need to try several times and reload the page, or unplug and replug, for it to work. Sometimes, I need to switch keys.
It's like arch linux.
Everybody says there's never a problem with it, because, well, geeks lie.
I use Yubikeys (on my Arch Linux machines..) - only problem I've had was soft-bricking one by entering the wrong password (GPG passphrase) more times than my max retry count. (Ironically, while setting up another as a spare with a different password, then mixing them up.)
My company uses these tokens and buys Yubikeys or Titans depending on price. They are pretty similar, although Yubico has more features (that we don’t use). I don’t recall having a failure across about 3000 devices. Usually the issue is people losing them.
You have to have a system that makes sense to use them successfully. The upthread guy is talking about multiple accounts lost forever, etc. Sounds like a mess.
The same problems exist on other platforms. Ever support challenge response tokens? Lol.
The only time I've had frustrations with my yubikeys is not being able to redirect them in some RDP sessions when logging into an account that's wanting a yubikey to authenticate. Otherwise, my keyring one that I use the most (original NFC) is nearing a decade old and never fails.
I use it almost exclusively in Windows and ChromeOS.
> But if you're running a privacy-focused browser
I'm sorry, but I was with you until here. If you're going to run a privacy-enhancing browser and then complain that providers get more cautious because of its privacy-enhancing features, then I'm not sure how to help you. I run anti-Google adware on my main browsing identity, but when sites give me shit I just turn it off and reload.
You have a point, but my complaint is that a service like Google shouldn't work based on a binary choice. You either use a persistent session cookie and you're logged in at all times (even if someone else launches your browser), or your session is 100% untrusted, requiring both a password and your phone to confirm it's you.
There should be a middle ground. If I'm logging back in 2 minutes later from the same IP, using the same browser on the same OS, just ask for the password. Or, even better, let me choose whether I want that "phone auth" option in the first place.
I have an account with backup 2FA options that I can't get into, as Google insists on approving it on a dead phone because my geolocation is different from when I last logged in.
I'm pretty sure they've existed since before TOTP was an option (I created mine in 2012, which was before I used any two-factor at least), but you have to go into your account settings to enable them:
Nit: Passkeys don't get sent to Android. Instead, the Gmail login would ask you to prove you possess the passkey on the Android by scanning a QR code displayed by the browser.
However I totally get your point. The UX is confusing. The terminology is confusing. Even if each step in the UX gives you an explanation of what it does, it's still useless because people are trained to skip the small print with explanation and click the biggest, most colorful button, especially when they are in a hurry. It's a genuinely hard problem for UX design.
The thing is, the ergonomics are so much worse than just a password. People who are able to remember secure passwords should be able to use them. A service not offering this option is not going to get used.
We've already had services that were proud of some allegedly superior auth scheme that relied on infrastructure I do not necessarily trust at all.
This sent me over the edge. It forced the realization that Google does not make decisions based on users. I know, I know, obvious; but even though I "knew" that, it had not clicked that obviously they would make decisions BAD for users. Google makes money from advertisers. They provide just enough service to users to not send them away. And just enough to keep competitors at bay. As long as users don't leave, they don't really care; they don't have to care.
This is the way it is. If we want something different we need services not paid for by advertisers. (And no. While google exploits users one way, apple does another. Picking your poison is not a choice).
To be fair, this is really just Google being idiotic with their product UX and has nothing to do with passkeys. I think most anybody would be confused if Google randomly changed their login UX to something users aren’t familiar with.
Yep, this is how Comcast is for me. I never get the notification for the "registered device" even though it's a newish popular phone. I don't trust passkeys because they rely on a certain device, instead of something you can just write down or save to a browser and use on multiple devices.
If I lose my phone, I can't get into my email to do anything else.
Since Apple didn't actually define it, this left a void for our thought leaders to answer that question for users hungry to know "what indeed is a passkey?".
I have always understood that Apple defined a Passkey to be a key pair that is synced through iCloud Keychain. Even their WWDC 2021 presentation distinguishes passkeys from security keys because they are "always with you" (the device-sync aspect) and "recoverable". I think the definition was later extended to other cloud-sync methods.
I also think the article makes the wrong trade-offs. Security keys are not important [1]. They are only used by a negligible number of technical users and a small number of companies that really care about security. Getting people off passwords is necessary for improving web safety and 99% of the population is never going to use security keys unless they are forced to. Passkeys do have a good chance of getting people off passwords, especially with deep OS integration. We shouldn't optimize authentication for that 1% or less because they'd be running out of resident key slots.
[1] I am the owner of 3 Yubikeys, 3 Yubico security keys and a SoloKey.
> I have always understood that Apple defined a Passkey to be a key pair that is synced through iCloud Keychain. Even their WWDC 2021 presentation distinguishes passkeys from security keys because they are "always with you" (the device-sync aspect) and "recoverable". I think the definition was later extended to other cloud-sync methods.
Their goal was an industry initiative, not an Apple Passkey product. By the time they were released in 2022, the definition had loosened into an experience, e.g. discoverable and providing the option for user verification.
The user can choose whether or not to use a passkey provider that is backed up/recoverable, and the relying party gets a signal to this effect. They might use this signal to determine whether to prompt to remove the password login option.
Based on FIDO standards, passkeys are a replacement for passwords that provide faster, easier, and more secure sign-ins to websites and apps across a user’s devices. Unlike passwords, passkeys are always strong and phishing-resistant.
> "The cryptographic keys are used from end-user devices (computers, phones, or security keys) that are used for secure user authentication."
"The cryptographic keys" is casually mentioned here with an implied reference to passkeys. It never explicitly states that passkeys are, in fact, "cryptographic keys".
I am a pretty technical user, and I would rather become a farmer than move to whatever "passkeys" are. Yubikeys or phones or whatever, I've had too many of these things go bzzzt, go missing, get wet, get broken, etc.
If a "passkey" is as reliable as my house key or car key, i.e. I can accidentally put it through a wash/dry cycle, then maybe. Maybe.
The nice thing about a username/password combo is I can remember them and use them everywhere. It's really straightforward. Whatever gimcrack method people use to implement "passkeys," does it work everywhere? Guaranteed?
I get it that there are some use cases where you need to have a hardware device, a passcode, a PIN and the blood of a left-handed virgin before you can access something, but those are edge cases. I almost never say this, but seriously, it would be easier and less troublesome to "educate users on the utility of passphrases instead of short passwords" than to make passkeys a thing.
> The nice thing about a username/password combo is I can remember them and use them everywhere.
The "use them everywhere" part, combined with not needing special software or hardware to use them, are the things that will keep passwords central to my authentication world for a very, very long time.
why does that even exist, that shouldn't be an option
this stuff is why I have been so worried/skeptical about Passkeys and the people behind them.
They have a responsibility to design their protocols so they aren't a tool well suited for big corporations like Microsoft to seriously mess up security and compatibility, and to enact all kinds of "bad faith" market practices to kill competition.
But instead, again and again, what they write and publish, and explicitly how they do it, reads more like "fuck you, we make abuse extra easy".
It's not just this nonsense about resident keys, but also e.g. how attestation is handled (and can be trivially abused to kill companies).
I'm pretty sure the goal here is to turn your phone into your passkey, _and nothing else_. Everything written in that article makes sense if you keep that in mind.
I had the misfortune of getting into a cycling accident which broke my phone display (completely lost display output and touch input), and it meant I lost access to all my OTP 2FAs for a couple of days (which is actually kind of scary).
I was able to fix it myself by getting parts and going through an ifixit guide (right to repair anyone? ;-), after which I promptly exported my 2FA seeds to (1) a backup phone (2) KeePass, which apparently supports them, who knew... and (3) a QR code on a printed piece of paper.
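The seed export works because TOTP is deliberately simple: each code is just an HMAC of the current 30-second interval, keyed by the shared seed (RFC 6238). A minimal stdlib-only sketch (the base32 seed below is the RFC test vector, not a real account secret):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30s steps since the epoch."""
    # Re-pad and decode the base32 seed as exported by authenticator apps
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: seed "12345678901234567890", T=59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

This is also why a printed QR code of the seed is a full backup: anything holding the seed can mint valid codes.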
Maybe I'm getting tinfoil-y here, but I think the horribleness is the point: consider how eager Apple in particular is to get people fully enmeshed in their services ecosystem. You're a lot less likely to try to roll your own backup, or otherwise exit the walled garden, if doing so means your entire auth story is irredeemably fucked.
The thing that strikes me about this whole story is that during a lot of the initial discussions of passkeys, a common point brought up on the anti-lockin side was the ability to use non-phone providers like yubikeys. If the actual implementations make this less viable, as discussed in the article, then that shifts power towards lock-in.
Not tinfoil: Apple's privacy pushes (in some markets) are based on driving up lock-in and benefiting their ads and apps. Any consumer benefit is a second-order impact.
I'm pretty nervous about Passkeys for exactly these reasons, and I'm still not at the point where I feel comfortable advocating for them, but I'm forced to admit that if anything, Apple has (so far) arguably done the best job of any of the major tech companies at discouraging vendor lock-in with Passkeys.
Blocking attestation requirements, opening up 3rd-party providers earlier, and (I'm not sure if it's released yet) committing to search. I even saw recently that they're releasing Chrome/Edge extensions for Windows to sync keys.
Do I trust it? Ehhh... I still can't generate passkeys on Linux as far as I know, so I'm definitely not going to be using them any time soon no matter what. There are still articles like this pointing out abusable features that I'm not sure should even exist in the first place. And it's honestly just going to be a while for me to get over the weird amount of advocacy that so drastically misunderstands what portability even is in the first place (no, 1Password does not make passkeys portable, standardized export/import formats as a requirement for certification make passkeys portable).
But I think the signs are that Apple is caring a lot more about avoiding vendor lock-in than Google/Microsoft are right now, which is a very weird thing for me to say.
That's indeed scary. Losing your 2nd factor shouldn't block you from accessing your accounts indefinitely. There should at least be one recovery path outside 2FA be it "printed recovery keys", "email recovery", "support channels", or even (despite being fully insecure) SMS maybe with a grace period (like 48 hours). Backing up your 2FA secrets isn't user-friendly at all, and it's even harder after you've started using it.
Yes, those recovery paths are also susceptible to phishing and scam attacks, and they should be designed with that in mind: for instance, with ID verification, notifications across multiple channels, and process delays.
Everyone should go over their Authenticator app and check their recovery options with every account they have there to make sure they don't fall into this trap.
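The grace-period idea above can be sketched as a tiny state machine: recovery via a weak channel (e.g. SMS) only completes after a delay, during which the real owner is notified everywhere and can cancel. The class names and the 48-hour window are assumptions for illustration, not any real service's API:

```python
import time
from dataclasses import dataclass

GRACE_PERIOD = 48 * 3600  # assumed 48-hour delay before recovery completes

@dataclass
class RecoveryRequest:
    account: str
    requested_at: float
    cancelled: bool = False

class DelayedRecovery:
    """Hypothetical sketch of a delayed account-recovery flow."""

    def __init__(self, clock=time.time):
        self.clock = clock  # injectable for testing
        self.pending = {}

    def request(self, account):
        self.pending[account] = RecoveryRequest(account, self.clock())
        # ...notify the owner on all registered channels here...

    def cancel(self, account):
        if account in self.pending:
            self.pending[account].cancelled = True

    def complete(self, account):
        req = self.pending.get(account)
        if req is None or req.cancelled:
            return False
        return self.clock() - req.requested_at >= GRACE_PERIOD
```

The delay turns an insecure recovery channel into a tolerable one: an attacker has to hold the channel for two days while the owner is being warned.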
> Losing your 2nd factor shouldn't block you from accessing your accounts indefinitely.
You’re absolutely right, and also you don’t have to worry. Everyone who operates auth of any sort will be forced on day one to have reasonable recovery. Nobody is gonna lock customers out because they lost their super-secret private key.
In practice, it goes back to email recovery for 98% of services. This will remain true with passkeys. Just like people forget passwords today, they’ll lose their passkeys tomorrow.
An average user can keep perhaps 5 decent-entropy passwords in their brain, at most. Thus, they can be useful for master passwords and your bank account. Other than that, it’s an elaborate dance of reusing existing passwords, using low entropy ones for low-value services, and for some of us, systematic usage of pw managers. But it’s still a dance, extra steps. And the UX is fragmented at best.
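For a sense of scale, a rough entropy estimate (assuming characters or words are picked uniformly at random, which human-chosen passwords never are):

```python
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy for a secret chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# 8 random lowercase letters vs. a 5-word Diceware-style passphrase
print(round(entropy_bits(26, 8), 1))    # 37.6 bits: crackable offline
print(round(entropy_bits(7776, 5), 1))  # 64.6 bits: far more resistant
```

Five genuinely random, memorable secrets is about the realistic ceiling, which is why everything else falls back to reuse or a password manager.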
Passkeys has a chance to greatly improve both security and UX of happy-path auth. That’s not bad at all.
Personally, I’m more worried about spec bloat. It looks like tons of knobs and flags already, imagine where we are in 5 years.
I’m also a little worried about public computers, shared devices, borrowing etc. Not everyone has a personal $1000 phone. This whole “my own device” assumption is a first world bias, and the experts should know better.
>I’m also a little worried about public computers, shared devices, borrowing etc. Not everyone has a personal $1000 phone. This whole “my own device” assumption is a first world bias, and the experts should know better.
Exactly. That's why I'm in favour of SMS-based auth for really important services like banking and government services. Your phone got broken or lost? Most providers around here offer replacement SIMs immediately, or in 24h at worst. Your phone is broken, but you have a 20-year-old flip phone? Pop your card in there (with a plastic frame) and you can still authenticate (another reason against non-physical SIM cards).
I saw mentions that SMS is insecure, but I never heard about a credible remote attack that didn't involve provider cooperation or taking over your phone (someone driving to your house and setting up a fake cell tower in your yard doesn't count for most people). Yes, the user experience is annoying, to say the least. Consider the current system we have in Poland for government services (taxes, health care, driving agency, benefits). Most people use their bank as the auth provider; a typical session looks like this:
- go to a gov website, click login, be presented with various login options that include personal certs etc., choose your bank from the list
- login to your bank's system the normal way (with physical 2fa if you have it, printed keys, or SMS).
- then the bank asks you "do you want to provide an auth request that contains these personal details to _gov_agency_A; if yes, tick these 4 boxes that absolve us from anything you do down the line"
- then they send you another SMS to confirm finally authenticating you to the site
- you can browse the site etc., but let's say you want to send in a document that requires signing; you fill that document online and they ask you to "select your signing provider" (despite your already being logged in)
- you select your bank, you go through two rounds of SMS again to sign the doc
- phew... Done
It's rather elaborate and hinges on SMS being secure. Most older people get lost at around "tick these 4 boxes" part, so most likely the whole process is being done for them by a local gov/library/internet cafe employee or a relative.
Is it secure? It's pretty annoying to use, but personally I consider the security adequate.
SMS is monumentally insecure, and any suggestion to use it as an auth factor (much less a recovery mechanism) is wildly irresponsible. Not only is SIM swapping as easy as convincing the teenager at your local phone store that you lost your phone and want to pay his commission when buying a new one, but the SMS protocol itself is unencrypted and you can MITM, or just straight up spoof it, with a few grand worth of equipment (see "stingrays" for the professional version of this). This _absolutely_ happens all the time in the EU too.
> I saw mentions that SMS is insecure, but I never heard about a credible remote attack that didn't include provider cooperation or taking over your phone.
Technically correct, but how likely those attacks are seems to depend on the country. While I'm not aware of this being an issue in any EU country, SIM swapping is a common threat at least in the USA. Yes, American providers really need to improve the security of their procedures, but this means that, in the current situation, the security of any SMS-based authentication flow heavily depends on which country the user is located in.
> In practice, it goes back to email recovery for 98% of services. This will remain true with passkeys.
My understanding is that passkeys are really intended for the non-technically-skilled users.
So then, how will passkeys succeed with that audience? A high percentage of them already routinely use password recovery mechanisms rather than keeping track of their passwords. That's an established habit. If they can keep doing that, then why wouldn't they?
Yeah. They’d only switch if it’s discoverable and easy, (perhaps their browser + the website presents the option).
Also, the “forgot password” flow itself has opportunities for happy-path improvements that are much simpler than passkeys. I have thought for a long time that we should embrace that and lean into it, perhaps changing it to a better “magic link” type of flow.
My point here is to note that "phones" are not a good 2nd factor, unfortunately, because they're not that durable and are kind of targets of theft. So moving to solely rely on phone sounds like a bad idea.
In my case, this was not the end of the world since I use a Yubikey for Google rather than TOTP, so at least my core email services (which represent a huge identity provider) were fine.
(This is also the reason why I could afford to wait to get parts and fix the phone rather than get into some panic mode of having all my digital accounts in a state where I might get locked out at any point.)
Yes, I even use multiple FIDO2 keys for both convenience (some stay plugged in to my machines) and as backups. I find Passkeys convenient too, but the author's points need to be addressed, I agree.
Using "something you have" as an extension to "something you know" is essentially the point of all of 2FA. That's why backup 2FA methods are essential; if it's not extremely hard to get access to your account(s) after you lose all of your second factors, 2FA is pointless and you could've just stuck to using passwords.
That said, passkeys aren't necessarily second factors; they can be a relatively secure first factor as well, basically acting like long, randomly generated passwords that are impossible to reverse or to use in credential-stuffing attacks.
See, this is where the metaphor breaks down. At no point was the phone "lost". The 2FA tokens are perfectly safe, yet there's no way to get to them; even though you still "have" the things, you can't prove you have them.
Which is why having 2FA _solely_ on a phone (like OP implies) is a bad idea. It's a fragile device that can easily render you unable to prove you still have it.
If I break my house key I can know exactly where the broken parts are but I still can't unlock my front door. "Broken" and "lost" are the same things here.
Of course you shouldn't rely solely on your phone, that's what the recovery keys any decent website makes you save or print out are for, or the other alternative 2FA options.
> If I break my house key I can know exactly where the broken parts are but I still can't unlock my front door.
This actually isn't true (it's exactly why digital 2FA is different!) If you break a physical key (already much less likely) you can still read the bitting (~ password) off of it. Bring the broken pieces to a competent locksmith and they can originate a new key for you. 2FA doesn't let you do this (intentionally, it's not a bad thing but it does mean recovery is harder).
> recovery keys any decent website makes you save or print out are for
Right, and almost all of the services that I still have on TOTP 2FA are not decently implemented... and do not have the concept of recovery keys (they are actually a somewhat recent inclusion in the setup process)! Sites that are modern enough to have made recovery codes usually also support HW tokens which I would've used instead.
Everyone has their own security risk profile. If someone decides that effective 2FA isn't worth it considering their own profile, that's legitimate. It's not "cheating", it's finding ways to work with the system you have in the way that you deem best for you.
A 2fa device, perhaps? Access to Bitwarden requires 2fa, and confers on the device I logged into the ability to 2fa for other purposes. I don't leave Bitwarden unlocked for more than a few seconds, so the two factors are "having a device on which I've logged into Bitwarden already" and either "being able to satisfy a biometric sensor" or "knowing my passphrase".
I do also have the ability to bootstrap Bitwarden access on a new device without an existing device -- the two factors then being "knowing my passphrase" and "having my security key".
> I had the misfortune of getting into a cycling accident which broke my phone display
oh do not worry we got you
our new phone backup program did back up that important secret of yours
I know you disabled the backups because you didn't trust us but because people lost access to our services we just enabled it anyway and it can no longer be disabled.
yes we know that after syncing TOTP secrets in plain text no one trusts us but how else do you get access to your secrets again after you lose your phone, or have you forgotten that it was also your only access to the 2FA of your google account?
now you can just go to your internet provider and get a copy of the secrets they wiretapped for you, for only 5.99€, as you agreed to in the fine print of your latest phone contract
it's easy, don't worry, so easy that even that new policeman who always gets everything wrong was able to get your passkeys last week. Why? Uh, idk, he had a judge-signed letter, something about impersonating you so they can trick and jail someone called Tom who annoys them due to his anti-corruption protests. Hm, I think that's the same Tom you labeled as "best friend" in your address book. But don't worry, he will never blame you for it. I mean, he died a day after you last saw him half a year ago, and the person you have been speaking with was just a hacker who used his passport to get his passkeys from us. AI voice and video generation has come quite far, hasn't it.
Yes, KeePass (and KeePassX and KeePassXC) support generating 2FA codes, but it is generally not a good idea to store those alongside your passwords; if you do, they aren't a 2nd factor anymore. You can mitigate the issue by having an encrypted DB only for 2FA codes, but I would still advise keeping those in a completely separate app anyway.
Apple opened up OS integration for other applications. 1Password is currently doing beta testing of their Passkey implementation.
Besides that, the whole idea of Passkey (in contrast to what this blog claims) was that the key material can be synced between devices, so I am not sure how only the phone would be 'a passkey'. iCloud Keychain syncs my Passkeys between all my devices, including to my MacBooks.
The problem with cloud-sync-based managers like the iCloud Keychain is bootstrapping: you need to be able to log in to the services themselves to provision access to the passwords.
This makes travelling a bit risky, since it's not that hard to lose or break your devices, or have them stolen, during a random trip. That makes recovery immensely hard, since you cannot just hop onto a public terminal and authenticate (which might involve entering 2FA codes you can no longer get).
This is why physical tokens are still quite useful. They're rather unattractive to thieves, don't require their own Internet connection to work, and they're relatively small and cheap, so you can get a bunch of them to stuff in various places, increasing your chances of having one still available to you.
Knowing about that problem ahead of time, you can plan a solution though. Everything I have is cloud-synced, and even if I lose my phone right now in a random country, I definitely know how I can recover all my 2FA tokens and logins from a random terminal (or preferably a new boxed phone) -- I DO have to remember some passcodes which I otherwise never use, but that's not too hard.
If Google or Apple implements this, they can design a solution too.
> I DO have to remember some passcodes which I otherwise never use but that's not too hard.
Right, but you're giving up a lot of security to do this, since it implies that with these rare passcodes someone else could also bootstrap your logins.
With HW tokens, you don't have to worry about recovery passcodes being leaked/hacked (the recommended procedure today is to print out the recovery codes and destroy digital copies).
This idea of printing out recovery codes seems so deeply out of touch with basically everyone leading a modern digital life that I can't believe serious security experts actually recommend it with a straight face.
No one has a printer anymore, and I'm sure as hell not including a trip to a local printshop as a part of signing up to 2FA for some random site (yes, even Gmail).
Of course, to do that I'd need to go to a public library, where I have no idea if they keep copies, and where someone might mistakenly take the pages from the printer, which tends to be far away from the computer you must use to print.
Don't explicitly label the codes with their corresponding accounts on the print out, just print the actual codes. Write the corresponding accounts in later. Recovery codes don't do much unless you know where to use them.
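Generating the codes themselves is straightforward; a minimal sketch (the code format and alphabet here are arbitrary choices for illustration, not any site's standard):

```python
import secrets

def generate_recovery_codes(n=10, groups=4, group_len=4):
    """Generate n high-entropy one-time recovery codes.

    Each code is `groups` dash-separated groups of `group_len`
    characters, e.g. 'k3f9-x2mq-...'. The 31-char alphabet gives
    roughly 20 bits of entropy per group (~80 bits per code).
    """
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"  # no ambiguous 0/O, 1/l/i
    codes = []
    for _ in range(n):
        parts = [
            "".join(secrets.choice(alphabet) for _ in range(group_len))
            for _ in range(groups)
        ]
        codes.append("-".join(parts))
    return codes

codes = generate_recovery_codes()
```

The printout then only needs the codes themselves; which account each belongs to can be penciled in (or memorized) separately.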
Grab a pen and something to write on (worst case: buy a pen and write on receipt paper). Write down a single recovery code, the one for your cloud storage thingy. Store the other recovery codes there.
You could just store them in a separate password manager like BitWarden? Or even encrypted in a separate Dropbox account?
Ultimately if you want to be able to recover your identity from anywhere in the world with absolutely nothing on you except cash (to buy a new device and service), you have to store this data somewhere. And you wouldn’t store this data in the same place that you’re trying to recover because that’s not very useful.
Is it without risk? No, but there is no risk-less way to be able to recover a piece of data once you lose all your possessions somewhere random in the world because the only thing you have left that you can still use is what you know.
You can lose hardware tokens in the same way you lose a phone? Then you’re just as screwed?
This isn’t a hardware token versus passkey problem. It’s a problem period if you store a piece of vital data on a physical device. You can lose it, period.
The only way to restore that piece of vital data is to have a backup. To have it restorable from any connected part of the world after complete loss of your personal artifacts, either you need a very trusted intermediary that you can contact, or you need to store it somewhere Internet-accessible, preferably encrypted with a key that you can remember.
"It's easy to lose devices when you travel so just get a shit ton of devices. If you're a big enough of a fuckup to lose them all then you've got bigger life problems anyway."
I like good advice that, upon hearing it, seems obvious enough it can be misinterpreted as a dig at one's competence. I'm much more likely to follow it and get my ass saved (I'm the kind of medium grade fuckup that would lose all but one of them).
That's exactly what's happening. Is it really shocking that a collaborative standard is designed to benefit the entrenched big tech companies at the expense of users and would be competitors? Funny how the standards never "accidentally" favor the user.
> This leaves few authenticator types which will work properly in this passkey world. Apples own passkeys, Android passkeys, password managers that support webauthn, Windows with TPM 2.0, and Chromium based browsers on MacOS (because of how they use the touchid as a TPM).
All of those platforms, with the exception of password managers (which will be excluded by the vendor lists), also have the compute needed to evolve the system into authorized actions. IMHO, that will eventually lead to devices where specific actions within apps are allowed / disallowed and enforced by the very systems that are being sold as authentication (for now).
As soon as those tech companies get an encryption / signing key they effectively control (via requests as the relying party), there's going to be a lot of incentive, and ability, for them to seize even more control over our devices.
My guess of what will happen is if a service sets `rk=required` and you are on a platform that doesn't want to (or can't) enable/support it, the process would always fail and you wouldn't even be able to register. Which seems like a shoot-yourself-in-the-foot kind of move if the goal is to onboard users and get more business...
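Concretely, `rk=required` surfaces in the registration ceremony as the `residentKey` member of `authenticatorSelection` in the creation options. A sketch of what such options might look like, written as a Python dict mirroring the JSON a server would hand to `navigator.credentials.create()` (field names follow WebAuthn Level 2; all values are illustrative):

```python
import secrets

# Illustrative PublicKeyCredentialCreationOptions. Field names follow
# the WebAuthn Level 2 spec; the rp/user values are made up.
creation_options = {
    "rp": {"id": "example.com", "name": "Example"},
    "user": {
        "id": secrets.token_bytes(16),   # opaque user handle
        "name": "alice@example.com",
        "displayName": "Alice",
    },
    "challenge": secrets.token_bytes(32),
    "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # ES256
    "authenticatorSelection": {
        # The option at issue: the RP insists on a discoverable (resident)
        # credential. An authenticator that can't (or won't) store one must
        # fail the ceremony, so registration never completes on it.
        "residentKey": "required",
        "requireResidentKey": True,  # legacy alias for older clients
        "userVerification": "preferred",
    },
}
```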
Didn't they already figure this scam out with phone verification? You let the user signup easily and then rug pull them a week or so later forcing them to do the thing they would have balked at if it was an upfront requirement. Now they have to jump through a bunch of hoops to get something they've already put time into working again.
Even if they abandon the account you still made your signup metric go up and probably collected some usage info and maybe PII. And you can also block them from signing up again until they do what you want.
Hardware tokens (Yubikeys, etc) are signed by their vendor. They support attestation, which allows a site to disallow vendors not on a whitelist. Some banks (Vanguard was/is one) actually enforce this, preventing all but a handful of hardware keys from working with their 2FA.
> Chrome’s users have an interest in ensuring a healthy and interoperable ecosystem of Security Keys. To this end, public websites that restrict the set of allowed Security Keys should do so based on articulable, technical considerations. They should regularly update their set of trusted attestation roots that meet their policies (for example, from the FIDO Metadata Service) to ensure that new Security Keys that meet their requirements will function.
> ...
> If Chrome becomes aware that websites are not meeting these expectations, we may withdraw Security Key privileges from those sites. If Chrome becomes aware that manufacturers are not maintaining their attestation metadata we may choose to disable attestation for those devices in order to ensure a healthy ecosystem.
e.g., it is acceptable for a bank or other public-facing site to say "we'll only accept authenticators which have been L2 certified via this independent program that maintains an up-to-date list".
It is not acceptable to say "we'll only accept this one vendor's products, and maybe another vendor after a 2 year audit if we feel a business need".
And Chrome has stated publicly that they will remove some or all of the WebAuthn API from your domain if you do so.
This principle will last until there is some major Chinese key provider that adheres to all the standards. Then, we'll go the way of TikTok with "risks because of control by the Party".
I'm not a security or crypto guy at all. I found this very difficult to follow, and I suspect others might too.
My questions probably seem weird to someone with enough background context to understand the post, but I am getting wrapped around the axle every sentence or two.
> It all comes down to one thing - resident keys.
How/why? What's the connection to passkeys or HSMs?
> we need to understand what a discoverable/resident key is
Yeah okay... does this imply that all resident keys are discoverable keys? Or that all discoverable keys are resident keys? Or both?
> You have probably seen that most keys support an 'unlimited' number of accounts.
No. Does "keys" here mean passkeys? Or keys stored on HSMs? Or both? Or something else entirely?
> This is achieved by sending a "key wrapped key" to the security key.
Okay so an HSM can apply to an unlimited number of accounts because it can store... some kind of key wrapped in another key (of the same type? different type?)
I believe the primary point is that WebAuthn is being pushed to use a "passkey" model where each site creates a credential that consumes storage. Displayable site and user account names, a user record handle, and the private key all take up storage, along with a few other items.
A mobile phone could store 10 thousand passkeys without breaking a sweat. Modern hardware keys might only be able to store 25 total in available flash.
The reason for wanting this storage is discoverability - the ability to hit say GitHub.com's new passkey support on the login page, and log in without having to even type in an account name. The browser provides a password manager-like experience. Just like passwords in a password manager, the passkey locally becomes a record of an account on a site.
However, WebAuthn also has quite a few other, non-passkey modes. Non-discoverable credentials expect you to provide a list of handles for a particular user account, which were provided by the keys as part of registration. Only credentials which match a handle are given as options during authentication. For hardware security keys, they leverage this to actually store the record needed for the future cryptography in the handle itself - this mode doesn't take up flash storage.
So the user would type their username, some API returns a list of handles, and this could be used against a security key to authenticate, with no storage limitations.
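A rough server-side sketch of that lookup (the names and toy data are my own; a real deployment would return these as WebAuthn `allowCredentials` entries in the authentication options):

```python
# Hypothetical server-side lookup for the non-discoverable flow: the user
# types a username, and the server returns the credential IDs (handles)
# recorded at registration. Only a key that recognizes one of these
# handles can answer the subsequent challenge.

# Handles recorded at registration time, keyed by username (toy data).
CREDENTIALS = {
    "alice": [b"\x01" * 32, b"\x02" * 32],  # two registered security keys
}

def allow_credentials_for(username):
    """Build the allowCredentials list for the given account."""
    handles = CREDENTIALS.get(username, [])
    return [{"type": "public-key", "id": h} for h in handles]

options = allow_credentials_for("alice")
```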
The problem with this argument IMHO is that a lot of sites take a policy of not revealing if an account exists or not. We've seen recovery processes that say "if this account exists, you should receive an email shortly". A login process that provides an API to detect if a username or email address has an associated account and how many credentials have been recorded against it may simply not be acceptable to the site.
It seems more likely that we eventually have security keys with 10x the available storage, rather than sites adopting this process widely enough to make an impact against the current hardware limits.
> The problem with this argument IMHO is that a lot of sites take a policy of not revealing if an account exists or not.
They don't, really. Because they have to prevent me from registering an account with the same identifier (username/email) as someone else, thus revealing whether an account exists or not. So the fact that the password recovery page doesn't reveal it makes no difference for someone who wants to know.
Some sites go so far as to say they've sent the activation email, then instead send an email saying that there seems to be a registration attempt against an existing account.
How useful this sort of hiding is in practice is somewhat debatable. Github, forums, marketplaces and social networks all tend to have profile pages. I'm more likely to promote these as part of my public-facing persona as well.
The problem in this context, I believe, is that if the largest sites on the internet think they need to protect information about valid accounts, most people will have passkey "slots" on a limited-storage key fob taken up by those sites.
The discoverability argument is somewhat weak because your browser already stores and probably prefills the username.
About not revealing whether an account exists: A site could always reveal a set number of potentially fake handles.
So say a user has two handles registered, and the set number is ten. If the account exists, the two real handles will be in the list, alongside eight fake ones. If the account doesn't exist, all ten handles will be fake, but it's impossible to tell which case you're observing unless you have the key matching one of those handles.
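A sketch of how a site could implement this (the HMAC-based fake-handle derivation and all names here are my own assumption, not anything from the spec; deriving the fakes deterministically from a server secret keeps them stable across calls, so an observer can't spot them by polling):

```python
import hmac, hashlib

SERVER_SECRET = b"server-side secret, never leaves the backend"  # assumption
LIST_SIZE = 10  # every response contains exactly this many handles

def fake_handle(username, index):
    """Deterministic fake handle: stable across calls for a given user,
    but unpredictable without the server secret."""
    msg = b"fake|%s|%d" % (username.encode(), index)
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

def padded_handles(username, real_handles):
    """Pad the user's real handles with fakes up to LIST_SIZE entries.
    A nonexistent account gets LIST_SIZE fakes, so the response shape is
    identical either way."""
    fakes_needed = LIST_SIZE - len(real_handles)
    fakes = [fake_handle(username, i) for i in range(fakes_needed)]
    return sorted(real_handles + fakes)  # fixed order hides which is which
```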
Fake handles are possible, but nobody has attempted it yet that I have heard of.
There is no standard for credential handles (unlike what the article implied), so through heuristics you may be able to get some knowledge of which authenticator created them - and might be able to detect fake ones. You might want to pad both real and faked account lists to have the same number of returned values. These fake handle lists would also need to have some sort of heuristic to them - real accounts can change slowly over time, while fake accounts would be simpler to either always look static, or to regenerate on every call.
The system optionally saving a username is a nice convenience but doesn't really solve any deployment or security problems. Sites would be unable to rely upon that, and it doesn't help with information leakage.
You can save credential information in the client outside a security key and use that to 'upgrade' to discoverability - but you then have that security key only function on certain websites when using that client. You brought your own security enclave but are still platform-bound.
I'll try to explain their argument under the assumption handles were convincingly fake, e.g. there wasn't a heuristic to tell real and fake handles apart.
The underlying protocol (U2F or CTAP) will send all received handles to the separate hardware authenticator. Some of these may be real, some may be fake. Some may have been created by other keys.
There is a process to convert correct handles to a correct private key inside the hardware. This _should_ have some sort of integrity to prevent taking incorrect handles and creating garbage private keys as well - those will fail, but the user experience will be sub-par and there are always cryptographic concerns about processing attacker-chosen data.
So when I make the gesture to authenticate, the valid private key which came from a correct handle is used to sign a response message to the authentication request. All the fake handles and those created by other keys would be ignored.
So if the handles are convincingly fake, the web site would be the only one which would know which were real or fake (so that it can still offer proper user self-service management). An individual piece of hardware would know which were real handles that it created. An attacker wouldn't know if they were all fake.
It's a poorly argued point, agreed. Essentially the author is arguing that the ability in the WebAuthn protocol for Relying Parties to specify `rk=required` is considered harmful because it excludes tons of TPM hardware from being able to work as a passkey wallet/db. I think most people in the comments probably agree. That doesn't excuse all the confusion the author creates by essentially bike-shedding the definition of passkey for half the essay.
The hype around passkeys is high enough that basically all authentication layers are requiring passkeys when they're available. This is a problem because passkeys must be stored in the client-side authenticator (password manager, hardware token, whatever), some of which have very limited capacity for storing them.
This is compounded by two problems: (1) Extant standards for storing these keys on hardware tokens don't allow deleting them individually, though this is changing in the newest standard; (2) Many current hardware tokens claim to have huge capacities, but this is based on a different challenge-response mechanism than passkeys. As a result, users will be pressured into using passkeys often, run out of precious passkey space despite thinking they have plenty, and then be forced to forego the benefits moving forward or reset and lose their keys.
The article assumes quite a bit of knowledge of FIDO2 and your confusion is understandable.
> How/why? What's the connection to passkeys or HSMs?
Passkeys are implemented on top of FIDO2 and specifically utilise the "resident key" functionality of the FIDO2 spec (according to the article; I don't personally understand passkeys). FIDO2 hardware authenticators are not HSMs exactly, though they are similar and some devices (like Yubikeys) are both HSMs and FIDO2 authenticators.
> Yeah okay... does this imply that all resident keys are discoverable keys? Or that all discoverable keys are resident keys? Or both?
In FIDO2 "resident key" and "discoverable key" are synonymous. "Resident key" is the term used in the spec, however "discoverable key" is commonly used. One of many such cases of FIDO creating confusing terminology.
> No. Does "keys" here mean passkeys? Or keys stored on HSMs? Or both? Or something else entirely?
Neither, it refers to FIDO2 hardware authenticators (e.g. Yubikeys) which are commonly referred to as "security keys".
> Okay so an HSM can apply to an unlimited number of accounts because it can store... some kind of key wrapped in another key (of the same type? different type?)
A FIDO2 hardware authenticator (which is not an HSM per se) can be registered with unlimited accounts because it is effectively stateless; it doesn't store anything (assuming that you are NOT using resident keys, which must be stored).
When the authenticator is registered with an account, it generates a key pair on the device (e.g. an EdDSA key pair). Instead of storing the key pair, it encrypts the private key with the onboard master key (e.g. an AES256 key). It then sends the plaintext public key and the encrypted private key to the relying party (e.g. google.com), who stores them. When authentication is attempted, the encrypted private key (i.e. the "wrapped" key) is sent to the authenticator, where it is decrypted onboard and then used to produce a digital signature.
Note: The FIDO2 spec does not actually specify how to implement non-resident keys - wrapped keys are just one way of doing it. FIDO2 only requires that the private key must be securely derivable from the credential ID (where the credential ID is actually arbitrary data which may or may not be a wrapped key).
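A toy sketch of the stateless idea, with HMAC-SHA256 standing in for both the derivation scheme and the real asymmetric signing key an authenticator would use (this is an illustration of the derivable-credential-ID concept, not any vendor's actual construction):

```python
import hmac, hashlib, secrets

MASTER_KEY = secrets.token_bytes(32)  # never leaves the "authenticator"

def make_credential(rp_id: str) -> bytes:
    """Registration: mint a credential ID from which the per-site key can
    later be re-derived. Nothing is stored on the device."""
    nonce = secrets.token_bytes(16)
    # The MAC binds the nonce to this relying party, so tampered or
    # foreign credential IDs are rejected instead of yielding garbage keys.
    tag = hmac.new(MASTER_KEY, nonce + rp_id.encode(), hashlib.sha256).digest()[:16]
    return nonce + tag  # credential ID = nonce || integrity tag

def derive_private_key(credential_id: bytes, rp_id: str):
    """Authentication: re-derive the per-site private key from the
    credential ID the relying party sent back. Returns None for IDs this
    device didn't create."""
    nonce, tag = credential_id[:16], credential_id[16:]
    expect = hmac.new(MASTER_KEY, nonce + rp_id.encode(), hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expect):
        return None
    # Stand-in for the real private key material (e.g. an EdDSA seed).
    return hmac.new(MASTER_KEY, b"key|" + nonce, hashlib.sha256).digest()
```

The same master key plus any valid credential ID reproduces the per-site key on demand, which is why the device can serve "unlimited" accounts with zero persistent storage.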
That's a rather uncharitable take on the situation. I'll propose an alternative: If you want to take advantage of the new auth standard that will eliminate weak passwords and password reuse (thereby preventing 99% of casual account break-ins), you'll have to spend $30 to upgrade off the legacy yubikey you've been coasting on since 2013.
Why should I be forced to upgrade? Non-resident keys also eliminate weak passwords and password reuse. Resident keys only add a marginal improvement (i.e. you can plug in a key and the service knows which account it belongs to), and that doesn't seem like a good justification for deprecating all the existing authenticators in use today.
As matthewaveryusa says above, you can have the key on the Yubi generate then encrypt the private key; that encrypted private key is then stored on mass storage (synced to iCloud etc). Then to use it you supply the key + data to sign the auth challenge.
My issue then is that these keys allow total tracking. We need hardware implementing more complex and privacy protecting schemes (BBS+ etc).
I would go as far as to say it's too charitable a take.
Shared secrets _should not exist_ (outside of short-term temporary usage, i.e. not for 2FA/FIDO).
They are a liability, they are a security risk, they promote bad security practices.
The best example is TOTP (which is quite flawed from a security POV). You don't want to ever share the shared secret across devices (or back it up), but because that is possible, and because flawed 2FA implementations are the norm rather than the exception, you are kind of forced into it. And as things look now, passkeys will go in the same highly flawed, user-hostile direction.
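TOTP's shared-secret nature is easy to see in the algorithm itself: anything that holds the secret can mint valid codes, so a synced or backed-up copy is a full second factor. A minimal RFC 6238 sketch (SHA-1, 30-second steps, 6 digits):

```python
import hmac, hashlib, struct

def totp(secret: bytes, unix_time: int, step=30, digits=6) -> str:
    """Minimal RFC 6238 TOTP. Any device or backup that holds `secret`
    can compute the same codes - the secret IS the second factor."""
    counter = unix_time // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59
print(totp(b"12345678901234567890", 59))  # -> 287082
```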
> You don't want to ever share the shared secret across devices (or back it up)
Hard disagree there. I do not feel comfortable unless I can backup a key. Phones get lost/broken/stolen all the time. Is it less theoretically secure? Sure, whatever, but I am not James Bond.
The point is you would have a different key on each device, each of which can access your account. This gives you a backup, in fact a better one, because if one is compromised and locked out you can still use the others. The main challenge is automating this process so you can properly mirror your keyring across multiple devices, which I don't think there's a standard solution to. So it would be a manual process for each account, which kinda sucks.
At the moment, yes, that's the process, annoyingly. Ideally, you would sign up with one and then be able to automatically enroll the others. That is in principle possible if you don't use resident keys and instead each device has the public keys of the other devices you want enrolled at the same time, but I don't think it is currently supported by the standards.
There are many proven approaches to solving this.
For example, "blessing" the enrollment of a new device using another one, potentially across physical locations (similar to what Discord and Steam did, at least for a time, as far as I remember).
> How would you sign up a new service under this scheme?
The usual solution for this is to have multiple keys. It's logically equivalent to having a backup key, but it's more secure because if you lose a key, you can use another key to disable the lost key.
> It's logically equivalent to having a backup key, but it's more secure because if you lose a key, you can use another key to disable the lost key.
That's slightly more convenient but I don't see how it is more secure. With one key that has backups if I lose that key I can use one of the backups to disable that key.
Multiple keys is slightly more convenient in that scenario because with multiple keys I just have to disable the key that was lost, and then make a new key for the device that held that key and install it. With one key on multiple devices I'll have to install the new key on all of them.
Convenience is a key aspect of security, but consider the scenario where you have to replace all your locks while you issue a new key... you have to keep the extant key valid for a longer period of time.
The same is true for physical keys for cars, houses, lockers, etc., which is why people have an intuition to test out keys to make sure that they work.
Most people aren't going to do that for the standby keys. And while they test out the real keys, they mostly don't go from working to not working because someone did a garbage collection/unused keys pass or failed to update some field or deleted something on a server.
Yup. Instead they stop working because of rust, corrosion, wear & tear, or heat distortion from improper storage...
There seems to be an obsession that if a digital key doesn't comprehensively solve all problems, it's terrible, despite the empirical evidence that people are fully capable of using physical keys despite their limitations.
You don't need to back up a shared secret to gain exactly what you get from backing up a shared secret, except more securely:
You have a backup of a _different_ secret with a similar degree of "authority" (or, if it's copyable, with its only authority being to restore 2FA once, or similar).
Then if your backup gets stolen you can just go into your account management API and disable/delete/flag it. In that case, even if the encryption is broken, as long as you act fast enough the damage is trivially and conveniently contained (e.g. some password managers had insecure backup/storage in the past).
With the same key everywhere, you not only have to disable it, you first have to create a new key, then sync it to all your devices and backups, and only then disable the old key, which isn't great if you have more than one device or some of your devices are temporarily out of reach (e.g. you're on a business trip).
It's like reintroducing the "physical" problem of having to replace all the locks when you lose your house key, in a situation where you could have all the benefits of one key per lock and a different key for each person (i.e. device) without any of the overhead/drawbacks that would normally introduce.
The main issue I see happening here with a large list of keys is the lack of an automated way of making these backups: this would require a standard way for a backup system to use one set of secrets to authenticate another set of secrets, which AFAIK doesn't exist for webauthn (it must be initiated by the site, all of which will have different methods of doing so). Otherwise you would have to manually enroll multiple devices for each account, which is both painful and error-prone.
and then use it with a backed up secure cold secret storage
OR (if you can; only for technically versatile people, not a general solution at all):
As strange as it seems, even with all the fancy new technology, as far as I can tell the most reliable solution for long-term account recovery(1) is to get a very small number of long, unguessable, one-time-use recovery keys, encrypt them into a blob, print that out base64-encoded as a QR code (or similar), and then put it in a safe, maybe at a bank, maybe as more than one printout.
This solution, while AFAIK more reliable than any fancy technical solution, is imperfect in that:
1. it isn't viable for everyone (i.e. you need a place reasonably safe from accidental damage, preferably not in your home)
2. it requires the user to do the right thing
3. it has some initial one-time time cost
This means it's not viable for every single service.
Though you don't need it for that either; instead you can use it e.g. as a slow fallback to access an encrypted blob store in which you keep a database of one-time codes for resetting your various services. Then every time you sign up for a new service you extend that store using your hardware-bound keys, and if you ever lose access to all your hardware-bound keys (unlikely to ever happen if you act with a bit of care) you can go through the annoying process of getting your papers, scanning them, decrypting them, and getting your one-time reset codes.
Though now that I have already gotten way off topic: what I want is neither passkeys nor separate keys enrolled with tens of services. I want a widely used standard interface where I can use the identity provider *of my choice* with *any* service (one which is also easy for services to integrate).
AFAIK there is no technical reason this doesn't exist, and if we had it there wouldn't be any need for discussions about passkeys and passwords etc., because for most people there would only be one or two logins + 2FA.
This doesn't really change much, though? My keys can only have 25 resident keys on them, and I also have more than 25 passwords stored in my password manager.
Password managers can store passkeys. I plan on storing passkeys in a password manager for most accounts, and then moving the few that matter to be resident keys. The theoretical advantage here is twofold:
- Passwords are not guessable any longer
- Password managers don't expose secret material in normal operation, because they sign requests with keys stored in TEEs (i.e. most modern devices have an embedded security key)
Password managers are a security liability which only exists because of how flawed passwords are.
The original design of WebAuthn was all about taking both passwords and password managers out of the equation, noticeably reducing the attack surface.
Instead, the way it now looks, they will make password managers mandatory,
until they make "blessed" storage mandatory, at which point they effectively control the password manager and hardware security key industry (by deciding which ones work with their products), and then maybe kill the whole industry by only allowing the storage built into Android, iOS, Windows, etc.
And while stuff like this would have sounded like a crazy conspiracy theory in the past, the more I look into how passkeys developed in recent years (especially how they were presented), the more it sounds quite plausible. I mean, big corporations which have frequently been found to abuse their power and which pursue vendor lock-in wherever they can afford to, pushing a technology which looks like an improvement but can easily be abused to facilitate lock-in and control over parts of an industry, with the goal of abusing that... that isn't conspiracy territory anymore; that is what Microsoft did non-stop in the past and only stopped doing because it was no longer monetarily beneficial for them. But in this case it would be. For them and Apple and Google and a few other huge companies.
If passkeys become defined as resident keys, is this still true?
And if this is acceptable, honestly, do we need a new standard? Password managers exist today. Such that I already do what you are suggesting here with passwords. Does it really become much more secure by the move to passkeys?
Passkeys are for the people that don't even use password managers outside of what Apple or chrome provides by default if at all. Passkeys are trying to eliminate those ad hoc solutions by providing a different system. The transition will be slow and messy requiring most people to use passkeys and passwords (and maybe password managers) for a while.
But if the passkeys are copyable off of where you are storing them, then I'm not entirely clear on how they truly up the security?
I mean, I get the obvious ways that a challenge system is better than a bearer token. But I feel a ton is lost as soon as you move to the exportable keys.
Love to see an exploration on these topics. I confess I have not been following them much, lately.
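To make the bearer-token vs. challenge-response point concrete, here's a toy sketch. HMAC with a shared key stands in for the asymmetric signature a passkey would produce from a TEE-held private key; the point is only that each response is bound to a fresh, single-use challenge, so capturing one response (unlike capturing a password) gains an attacker nothing:

```python
import hmac, hashlib, secrets

class Server:
    """Toy relying party: issues a fresh one-time challenge per login.
    A captured response can't be replayed because its challenge is
    consumed on first use."""
    def __init__(self, shared_key: bytes):
        self.key = shared_key            # stand-in for the user's public key
        self.outstanding = set()

    def new_challenge(self) -> bytes:
        c = secrets.token_bytes(32)
        self.outstanding.add(c)
        return c

    def verify(self, challenge: bytes, response: bytes) -> bool:
        if challenge not in self.outstanding:
            return False                 # unknown or already-used challenge
        self.outstanding.discard(challenge)
        expect = hmac.new(self.key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expect, response)

def client_sign(key: bytes, challenge: bytes) -> bytes:
    # A real passkey signs with a private key inside a TEE/secure enclave;
    # HMAC over a shared key is a symmetric stand-in for this sketch.
    return hmac.new(key, challenge, hashlib.sha256).digest()
```

Once the keys become exportable, the "secret never leaves the enclave" part of this story weakens, which is the tension the comment above is pointing at.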
I think the general idea is that the vast majority of people have a smart phone, so the security model is to let people use the phone as the "key" to access services and take advantage of the biometrics/pin security as the main component of security access. This means that there are a lot of security compromises that make sense in the name of ease of use.
This model has been tested to some extent with Apple pay and Google wallet which people take relatively seriously since there's money involved. I think the model makes sense to improve security for the masses, but it's not good for people that want and demand more (like people that already bought YubiKey products).
Oddly, pay/wallet work for completely different reasons. Largely the absurd amount of monitoring that the credit card companies do of your transactions. That, and the general legal framework around charges.
Consider that it largely replaces 20-ish numbers with something else. It's slightly more convenient for folks, as you have your phone with you a lot.
So, for the passkeys: I know that there is a secure enclave in phones. I was not aware that they could store resident keys. Do you know what the limits are there?
I think the difference is that good passwords are still replayable. Such that moving away from bearer style tokens is a win. But if I have no way of controlling the use of individual passkeys, they lose a lot.
My worry is in the future there may sometimes be no other option than to take this "advantage". If a site implements the new auth standard, will it also keep the current username/password/2fa as an alternative option?
I think that probably depends on popular adoption. If going passkey-only causes a significant reduction in users being able to access the service, then there will always be a non-passkey option.
Personally, I think there is significant friction to adopting passkeys, so there is little risk of being forced into using them in the near future. Longer term, though, I have no idea.
$25. The Solokey 2 already used an STM chip that could support at least 20x the storage (in USB mode), but didn't activate it in its initial firmware.
Additional flash that is just as secure would be expensive, mostly because other smart card uses don't need it. But it doesn't really have to be secure, because resident keys could be stored in the same opaque style as on a server and only brought into the secure context when needed.
Edit- misremembered NXP->STM and added USB as difficulty getting significant flash within the NFC powered chip is an important consideration.
Presumably Yubico's upgrade path is to tweak the form factor slightly so they can fit more than a few KB of memory into the thing. I know it's possible: I can buy 50GB flash drives in the micro-Yubikey form factor (the ones that are just a rectangle of plastic that fits under a USB-A port's tongue), and they only cost like $10. So it's probably just something that Yubico needs to design into the next gen of keys, and I suspect it won't make them cost much more than $5 over the last gen.
My hunch is low volume and an enterprise-leaning customer base. Engineers aren't cheap, and those who can build security-sensitive products even less so.
When I bought a (single) Yubikey from their website late last year, it was Fedexed to me directly from their Palo Alto downtown office, not some distribution center in the middle of nowhere. That can't be cheap.
If you order a key and it comes from an Amazon warehouse, are you going to be worried about a supply chain attack? Maybe that's a benefit of sending by direct FedEx?
Can you give me a high level description of why passkeys won't work with my current hardware key, and then explain why they went with that implementation instead of one that works with my current hardware key?
`rk=required` means your hardware is required to store each and every derived key, not just the master key, all in the service of you not needing to remember your username anymore. Current security keys can handle a couple dozen derived keys at most, _if_ they can handle any at all.
This flies in the face of previous promises where 'every key can handle an unlimited amount of accounts'. In my eyes, this looks like a big push towards phones as passkeys, and nothing else. Would fit with the Bluetooth sync strategy as well.
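The storage asymmetry behind these two comments is worth spelling out: a non-resident credential can be re-derived on every login from a single device master secret, so the authenticator stores nothing per-site, which is how "unlimited accounts" was possible; `rk=required` forbids that trick and forces real per-credential storage. Here is a toy sketch of the derivation idea (real authenticators typically use a CTAP2 key-wrapping scheme where the wrapped key rides inside the credential ID itself, and details vary by vendor; all names and values here are illustrative):

```python
import hmac
import hashlib

# Toy stand-in for the secret burned into the authenticator at manufacture.
MASTER_SECRET = b"device-master-secret"

def derive_site_key(rp_id: str, credential_id: bytes) -> bytes:
    """Toy non-resident credential: the per-site private key is re-derived
    on demand from the master secret, so the device itself stores nothing."""
    return hmac.new(MASTER_SECRET, rp_id.encode() + credential_id,
                    hashlib.sha256).digest()

# The same (rp_id, credential_id) pair always re-derives the same key...
k1 = derive_site_key("example.com", b"\x01" * 16)
k2 = derive_site_key("example.com", b"\x01" * 16)
assert k1 == k2

# ...while different sites get independent, unlinkable keys.
assert derive_site_key("other.org", b"\x01" * 16) != k1
```

A resident key, by contrast, must be findable with no credential ID supplied by the site, which is exactly why it has to occupy one of the device's limited slots.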
Someone using easy to guess simple passwords probably isn’t using a Yubikey and also likely has no interest in getting one at all. Those 2 categories are very different people.
Any form of authentication based on "something you have, but can also lose" is fundamentally broken. Either I'll lose access if I lose the device, or their superior security doesn't matter because the weakest link has to be somewhere else.
Just print backup codes or add a second "drawer" key. The whole idea is great, but has several issues: very bad marketing, a weird configuration flow for the average user, the push for airdroppable/cloud-synced keys (a bad idea), and the most important one, the cost of keys.
Wouldn't this include passwords? People lose those all the time.
The proof will be in the pudding, but I suspect that widespread adoption of passkeys will make account lockout less common, rather than more, for the vast majority of people.
This raises a question for me. Why are hardware keys so limited in storage? How much extra would it cost to have a secure processor that could also access a mass-storage device built into the key? This mass-storage device would of course be strongly encrypted by the secure processor, with a key that would be erased at the same time everything else is erased.
Because secure, tamper-resistant storage is expensive.
I would even go as far as to say that from a security POV, the best security key is the one with zero storage. In my experience, any protocol that injects and stores a secure token into a security key/enclave/whatever, instead of deriving it from shared secrets etc., has serious flaws. Sometimes they are fundamental security flaws (like TOTP), sometimes complexity flaws. Similarly, you don't really EVER want to share a secure key for HSK/2FA across multiple devices: if one device leaks it, it's compromised for all of them. Instead you want a separate key (oversimplified) on _each_ device. On the login-provider/server side, the overhead for this is negligible in the bigger picture.
Take TOTP, for example:

- it's prone to MITM attacks when being set up (in a way you are very unlikely to detect if done well)
- it's prone to MITM attacks when being used (likewise very hard to detect if done well)
- its MITM attack vectors aren't limited to on-the-wire MITM; they can also be achieved through social engineering, which makes them IMHO pretty bad
- it's also prone to certain kinds of brute-force attacks in certain situations, and protecting against those without making your login trivially DDoSable is very, very hard

From a security POV it's better than SMS, but still a pretty bad design.
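The TOTP complaints above all trace back to one property: it's a symmetric shared-secret scheme, where server and client hold the same raw seed, and a phished six-digit code is valid for anyone for the rest of the time window. A minimal RFC 6238 sketch (SHA-1, 30-second step) makes this concrete:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: dynamically truncated HMAC-SHA1 over the counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over a time-based counter. Note that both sides
    need the same raw secret -- the structural weakness discussed above."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, T=59).
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Contrast with a challenge/response signature scheme, where the server only ever holds a public key and nothing replayable crosses the wire.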
> Because secure tamper resistend storage is expensive
The storage for resident keys would not need to be tamper proof. All that needs to be tamper proof is the processor that operates on unencrypted sensitive data and the storage for the private keys of the device.
The resident keys would be encrypted using a device private key before being saved to mass storage.
I think this is a conscious design choice made to keep these devices as "dumb" as possible. As soon as you add storage, you start opening up the same surface for vulnerabilities as any other storage device, next comes compute and eventually you have a full fledged computer instead of a dumb yubikey.
I'm sure they can go up in storage, but the more you add to them the more you increase the chances of fault. And these things currently take a hell of a beating before they don't work anymore.
There is also something a bit more auditable about a smaller storage. Though, even the small sizes are probably pushing the bounds of what can realistically be audited nowadays.
It's less that they are limited in memory, and more so that they are designed to not have memory limits.
If you look at TPMs, basically each time you want to sign something, your input is the data you want to sign and a sealed private key. The sealed key is the private key that was generated by the TPM and then symmetrically encrypted with the key embedded in the TPM. You store the sealed key in your mass storage, and provide it to the TPM for each signing operation. This design allows you to have as many keys as your mass storage will allow you to save.
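The sealed-key pattern described above is easy to model: the per-use key lives in ordinary, untrusted bulk storage, encrypted and MAC'd under a secret that never leaves the device, and is only unwrapped inside the secure boundary at signing time. This is a toy model for illustration (the SHA-256 counter keystream here is NOT real cryptography; TPMs use proper authenticated encryption):

```python
import hashlib
import hmac
import os

DEVICE_SECRET = os.urandom(32)  # never leaves the "TPM" (toy value)

def _keystream(nonce: bytes, n: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- illustration only, not real crypto.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(DEVICE_SECRET + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(private_key: bytes) -> bytes:
    """Encrypt-and-MAC a key so it can sit safely in unlimited bulk storage."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(private_key, _keystream(nonce, len(private_key))))
    tag = hmac.new(DEVICE_SECRET, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(blob: bytes) -> bytes:
    """Runs inside the device: reject tampered blobs, then decrypt."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expect = hmac.new(DEVICE_SECRET, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("sealed blob was tampered with")
    return bytes(a ^ b for a, b in zip(ct, _keystream(nonce, len(ct))))

key = os.urandom(32)
assert unseal(seal(key)) == key  # round-trips; key count is now bounded only by disk
```

The point stands: only the wrapping secret and the unsealing logic need tamper resistance, not the storage itself.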
What you're talking about seems to be what the article would call a "non-resident" key, whereas this commenter is specifically asking about "resident" keys.
Or, if you think you are describing resident keys, then you need to reconcile,
> This design allows you to have as many keys as your mass storage will allow you to save.
with the OP: the article states that to be roughly "20", and people tend to have more than 20 logins, and that is the reason the person you're responding to is asking the question they're asking.
What I'm saying is, if you look at the sequence diagram for the resident key, at step 3 there's no requirement to have the keys stored in the security key: you can save an RP-to-token mapping in the client, outside the key, and it's still considered a resident key.
I think what I'm saying here is that resident means resident to the client, not necessarily resident to the enclave. I took a peek at the spec and they define resident keys as being part of the "client platform" which they take care to clarify as "A single hardware device MAY be part of multiple distinct client platforms" https://www.w3.org/TR/webauthn-2/#client-platform
Maybe even make it modular. Sell something that looks like a thumb drive but has a slot in the back that you can plug a small security key into (think something with a form factor like a YubiKey Nano).
When a security key is plugged into the slot the thumb drive provides storage for the security key and appears to the computer as a security key.
When a security key is not plugged into the slot the thumb drive functions as an ordinary thumb drive.
You would ordinarily keep the security key plugged into the slot, but if you ever decided you needed more storage you could buy a bigger storage module, remove the security key from your old storage module, plug the old module into the computer, copy the encrypted files, plug in the new module, copy the files to it, then plug in the security key.
I don't know if this is the only reason, but mass-storage devices seem to have an unacceptably high failure rate and too short a lifetime for something I'd key large portions of my life to.
USB drives have ludicrously high failure rates because they're optimized for cost rather than reliability. Other forms of flash memory (e.g. SSDs) are quite reliable, despite having many more flash chips (and thus points of failure).
Mass storage for resident keys would not need to be written to often. Just when you create an account at a new site. I'd guess that would greatly lower the failure rate.
Is it even a good idea to use physical security keys as passkeys in the first place? Passkeys are meant to be a password _replacement_, and for that you probably want the 2-factor properties afforded by phones or desktops which usually require "something you know" or "something you are" to unlock in addition to the "something you have" afforded by physically possessing them.
IMO physical security keys are better left as second-factor authentication, to be used _in addition_ to passkeys in certain high-security contexts; particularly where resistance to cloning is a critical feature. Resident keys aren't necessary for that use case since by the time you get to the second factor step you already know the account you're trying to log into.
Further, the autocomplete functionality afforded by resident keys is important for the UX of passkeys in my opinion. I don't think it makes sense to sacrifice that in order to retain backwards compatibility with a small number of keys that only security nerds use. (Though if there were a way to maintain that UX without using resident keys, I'd be cool with that.)
Yeah, this is basically my take. At this point, the idea everyone is converged on is that you have a locally encrypted secure "vault" of some kind, that you can trust, and you need to verify identity with that system, perhaps with a password and perhaps with a key. It is easier to have some trust that your password manager of choice is more secure, rather than having to assume that every service in existence you create an account for is secure (or unphishable.) So, by the time you use a passkey, you're quite often in a more secure context where you've already established that identity: to your operating system, to your password manager that owns your passkeys, etc.
It also seems likely that places that didn't support hardware keys now or recently probably wouldn't have supported them in the near future. But the ROI for a Passkey solution is likely much higher since the buy in (just some software support) is much easier for people to achieve. Of course, this is only true for websites mainly; a Passkey is basically the equivalent of "Standardized SSH keys" for a website.
I see hardware keys as more useful for second factors like actually unlocking your vault with your passkeys inside of it, which might also want a password. I suspect hardware keys still have a bit of life left in them.
The Passkey login flow is actually super, super nice now that I can use it on GitHub, Gmail, etc as a primary method.
Password managers have already made passwords obsolete. I literally don't know any of my passwords except my master one. Passkeys are an insanely overcomplicated solution we don't really need.
Browsers just need a simple HATEOAS API for password managers to hook into, and web apps expose some HTML that triggers the browser. The password manager can then determine how to authenticate the user (however the user wants!), auto-inject the secret for that website, and the user is automatically logged in. E-mail reset if anything goes wrong.
From the "we're a website that wants super fancy security" perspective, I get that they want something more complex. But there should be levels of security that the user can opt-in to.
For example: most websites are fine with a simple password hash, assuming the password manager uses random passwords. Any website can implement that, every password manager can implement that, and it's better than 99% of regular users' password use today. So there's your baseline auth method.
Then if you want something like TOTP, OIDC, public-key crypto, etc the server can advertise it, the client can opt-in to it, and authentication can continue. But not every site needs to implement it, and not every user cares to use it. Basically, we don't need every site and every user to use the most secure methods. We just need to make it easier to get a baseline of improved security, and allow people to slowly opt-in to stronger security.
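The "baseline" level proposed above is roughly what stdlib key-derivation functions already give you: a salted, memory-hard hash the server stores, with no password rules needed if the client is generating random secrets anyway. A sketch using `hashlib.scrypt` (the cost parameters here are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os
from typing import Optional

def hash_password(password: str, salt: Optional[bytes] = None) -> tuple:
    """Baseline server-side storage: per-user salt + memory-hard KDF."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2 ** 14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2 ** 14, r=8, p=1)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("tr0ub4dor&3", salt, digest)
```

Every site can implement this today, and as the comment says, it's already better than what most users do unaided.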
> Passkeys are an insanely overcomplicated solution we don't really need.
This is simply not true. WebAuthn is not needlessly complicated (I wouldn't even call it overcomplicated; it's literally just a signed challenge/response dance). It improves on passwords+2FA in a few notable ways:
1. It prevents shared secrets from traversing the wire.
2. It naturally enforces that users are all using secure authentication keys without password rules nonsense.
3. It kills 2FA by allowing Relying Parties to request user presence verification as part of the primary challenge.
4. It is origin-bound which mitigates phishing.
Passwords don't have any of these properties. And since your password manager handles the details for you, why wouldn't you want it to improve its implementation under the hood making things better for you with zero effort on your part?
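Point 4 above (origin binding) works because the browser, not the page, stamps the origin into the client data that the authenticator signs over; the relying party then refuses assertions minted for any other origin. A toy sketch of just that relying-party check (names like `check_client_data` are hypothetical; signature verification itself is elided here):

```python
import json

EXPECTED_ORIGIN = "https://example.com"  # the relying party's own origin (assumed)

def check_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """The authenticator signs over clientDataJSON, which the browser fills in.
    A phishing page can obtain a perfectly valid signature, but the origin
    field inside the signed payload won't match, so the RP rejects it."""
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("challenge") == expected_challenge
        and data.get("origin") == EXPECTED_ORIGIN
    )

good = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                   "origin": "https://example.com"}).encode()
phished = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                      "origin": "https://evil.example"}).encode()
assert check_client_data(good, "abc123")
assert not check_client_data(phished, "abc123")
```

No amount of user training is needed for this to work, which is the whole anti-phishing argument.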
I'm desperately looking forward to my password manager integrating support for Passkeys such that I can:
1. Back up my keys to paper and restore them from paper
2. Disregard/end-run around the "user presence verification" challenge if I want to.
I already deal with a ton of "acknowledge this push notification" or "type in this TOTP code" to verify, and automating every one of those interactions has lifted a huge amount of distraction and hassle from my everyday login-access dances interrupting me every hour or two.
I worry that more and more security people will make their orgs require authenticator attestation, which basically compares a burned-in cert against those certs blessed by FIDO. If too many websites submit to that stupidity, the idea that you can use your Bash-scripted password manager for resident key auth becomes a figment.
That is all way more complex than you're acknowledging. I bet you that for the hundreds of millions of dollars that will be spent every year on all of that crap, and all of the pain it will cause in a variety of ways, it will prevent maybe 1000 actual attacks globally per year.
Security does not need to be an arms race. Good enough is good enough.
Frankly I don't like the website getting to determine how I practice my own personal security: that's just the path of corporate lock-in, and it also has exactly one outcome - everyone will select either "maximum" because "that's secure" or "minimum" because "user experience".
It's the worst of both worlds (i.e. the insufferable thing banks do where they try to force you to type in your password with the mouse).
When I use a Yubikey for passwordless authentication (FIDO2), it challenges me for a PIN before asking me to touch the device. If I give it too many incorrect PINs, the Yubikey locks up and requires a device reset, which invalidates all previous registrations to use that key for authentication. It doesn't seem like a big deal if someone steals my hardware token.
It means, though, that your secure hardware token has a reliable self-destruct for all its secrets, one that someone can easily trigger with even brief hardware access. For people who have a problem keeping sufficient backups (almost everyone on earth), this seems like a horrific blocker, a show-stopper for this entire initiative.
I personally think these things absolutely should be able to be exported and backed up separately. Many people guffaw that then the device isn't secure. But I just don't think I could realistically adopt this, nor do I expect others will, unless users get some better affordances: some capability to manage trust as we see fit, not just be pushed top-down into someone else's much narrower desired security behavior.
(Thankfully it seems like there is building interest in exports & portability, particularly as the OS/browser powered PassKeys arrives.)
> I personally think these things absolutely should be able to be exported & backed up separately.
I agree. The usual response is that you don't need to do this because you can have multiple hardware keys that authenticate to the same services, so you can store one as a backup.
But managing that sounds like a real pain in the butt to me (honestly, the entire passkey system sounds like a real pain in the butt to me -- but that bit particularly so).
But it looks like the major companies recognize that this is an issue and won't be requiring that part.
It'd be awesome if you could ask to enroll a variety of other devices all at once, without having them on hand.
Requiring ongoing physical access to your crucial backups to do any account enrollment or changes seems like a way to make sure you keep your crucial backup way too close to potential disasters. Ideally I want my backup keys many states away from me. But then I can't enroll them!
But it feels like there could be some kind of pubkey for those keys that I could also enroll at the same time as I'm getting my first device.
Except these devices don't have just one pubkey, because that wouldn't be secure. Maybe they pre-make and share 20 keys with a peer device or something. Somehow, though, the data needs to be able to get into the backup/other device in the end. Ugh, it's wild.
That would certainly be very convenient, but it would also retain/reintroduce several of the security weaknesses that passkeys are intended to mitigate.
The reality is that security is really, really hard. And it remains as true as ever that increased security comes at the cost of decreased convenience.
My personal attitude is that I make different security/convenience tradeoffs for different things. I do have and use hardware keys for very sensitive things. But they're rather inconvenient, so I don't use them for most authentication purposes. Does my account here on HN really need to have the best possible security? No, it doesn't.
So, in my opinion, both passkeys and the traditional username/password mechanism should be supported for most of the web. Which is likely how it's going to be for a long time.
The ability to do auto-enrolment would break the per-site uniqueness of credentials (which is what makes them pretty strongly phishing-resistant under most sane threat models where the browser isn't totally compromised).
Right now, the public key from one token is unique to that site (and specifically your registration attempt with the site, so you can have multiple unlinkable accounts using the one FIDO2 key). If you could do an offline and remote enrollment, you'd need to work with a single static public key (and corresponding private key) for all sites.
Despite all this, I still think this is a use case that's important - both the ability to have an offline backup key (even pairing all your tokens together at setup time to use a common internal root AES key wouldn't help as there's an anti replay counter in FIDO authentications from what I recall), and the ability to use passkeys portably across vendor ecosystems, without relying on a single ecosystem as your trust root.
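The "anti replay counter" mentioned above is the authenticator's signature counter: each assertion carries a monotonically increasing signCount, and relying parties remember the last value seen, treating a non-increasing counter as evidence of a cloned key. A minimal relying-party-side sketch (real RPs also handle authenticators that report a fixed 0, meaning the counter is unsupported):

```python
stored_sign_count = {}  # credential_id -> last counter this RP has seen

def accept_assertion(credential_id: str, sign_count: int) -> bool:
    """Reject assertions whose counter hasn't advanced: a cloned
    authenticator replaying stale state would trip this check."""
    last = stored_sign_count.get(credential_id, -1)
    if sign_count <= last:
        return False  # possible clone; a real RP would flag the account
    stored_sign_count[credential_id] = sign_count
    return True

assert accept_assertion("cred-1", 5)
assert accept_assertion("cred-1", 6)
assert not accept_assertion("cred-1", 6)  # stale counter is refused
```

This is exactly why naive key duplication across two tokens breaks: their counters would diverge and one of them would eventually look like a clone.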
There are some assumptions in your explanation that I'm not following.
For one, you could generate a new unique private key for the site login, and then encrypt that with the other device's public key. And you could sign it with the active device's key, so that third parties can't try to send you enrollment data.
If desired, you could handle this encryption in a way that makes it impossible for anyone else to tell what keys are being used.
Or instead of generating a private key, you could securely send a single-use token to the other device, and that token allows it to register.
Either way the fixed public key would only be used once, and only directly between the two devices. It doesn't get tied into the site authentication process. And you could replace or augment it with a symmetric key that's unique to that device pair.
> would break the per-site uniqueness of credentials
It wouldn't break things as I've described it. Each device would have a handful of pre-negotiated single-use public keys for the other device it could enroll with.
I tend to think there are no blockers and I just invented a better, obvious flow.
> you can have multiple hardware keys that authenticate to the same services, so you can store one as a backup.
The most common case where people are willing to spend $50-100 for extra security is businesses securing their networks. If you lose your passkey, just stroll on over to the help desk and show your ID, and they'll enroll a new one for you.
If you're an individual using a passkey with free online service, like Github, just enroll a TOTP key first and print out the QR code. Then if you lose your passkey you can use the QR code to get access to your account.
> ... you can have multiple hardware keys that authenticate to the same services, so you can store one as a backup.
Just one problem: as a backup, I would prefer to store it _away_ from my main hardware key. Every time I sign up for a new service, I need to go fetch the backup and update it...
I struggle with this on a lot of 2FA. I change my phone every 1 or 2 years when I get an upgrade, and for the 6 months after I end up having to keep my old phone around to handle the 2FA apps that can't be ported over. It is incredibly annoying.
For me, allowing a weak 2FA that moves you from the pool of people who can be trawled to the pool who need to be specifically targeted is a huge improvement, but the fear of losing access to critical systems because I lose my phone fills me with dread.
I use Aegis Authenticator on Android which does encrypted cloud backups, so changing phones isn't much of a problem.
I also only keep critical accounts there, the rest goes into Bitwarden. I realize this isn't as secure, but with those accounts I wouldn't even bother with 2FA otherwise.
I was forced to create my first ever Microsoft account to be able to back up my Microsoft Authenticator app. I struck a small blow in return by using a very rude email address for it.
Yubico has said that they're working on a standard to export passkeys and that future products will have that feature once the standard is decided upon.
That's not how my YK works. When I go to a new computer and login to my Google account, it asks me to insert it and press the button. Did I configure it wrong?
If you're only using it for two-factor authentication, you don't need a PIN. But when I tried to register mine as a passkey (passwordless authentication), my browser prompted me for a PIN. I didn't have one set at the time, so it kept rejecting whatever PIN I gave it. I had to use the YubiKey Manager to set a PIN before I could register it as a passkey.
I use Yubico Authenticator for TOTP via my YubiKey, and have a PIN set up because of that. Quite nice really; I imagine it's the same PIN you're talking about? I've not used it as a passkey yet.
Yubico sells YubiKeys, which are smartcard devices loaded with several apps (keyboard-emulation OTP, GPG, PIV card, and FIDO2).
They also sell cheaper security keys, which are purpose-built for FIDO 2 only.
When someone says they are using a passkey with a Yubico device, they are talking specifically about the FIDO2 functionality. This does not (at least currently) support import or export, partially because they want these devices to be sellable in regulatory environments where hardware-bound and non-cloneable credentials are required.
Are you sure you have a YubiKey (e.g. a "5 Series"[1]) and not a YubiCo "Security Key"[2]? The latter is a less expensive device with less functionality[3], though still good for arguably the most common 2FA situations.
Yes, you need to use `ykman` to set a PIN. This also allows some services (really only Microsoft Accounts right now) to use "passwordless".
The idea is you register 2 or 3 passwordless keys on important accounts. Keep one in the machine, one on your physical keychain, and one in a remote location.
It's optional and can be required by the service. Services like Microsoft that use security keys as a single factor rather than as a MFA are more likely to require it.
If you think about it, the core problem can be described as "authentication of the biological being with an electronic system".
When passwords are used, the authentication interface is a keyboard and you don't have any actual guarantees that the person typing the password is the person who claims to be. The passwords could have been extracted in so many ways because it depends on easily transferable knowledge.
Moving the authentication interface to device-to-device is actually much better: you no longer assume that the easily transferable knowledge was not transferred. Instead, you assume that the biological being is capable of keeping track of the authentication device, and people are naturally good at that.
You can increase the number of authentication channels to tighten it up a bit, you can restrict the authentication of the biological being with the device(FaceID) which will be used for authentication with remote systems but at the core I think it feels right to assume device(phone, key etc.) means the person.
It's also quite a human thing to do. At home, we share not only the Netflix password but also one of the credit cards. For practical reasons, one credit card stays with the spare keys, and when there's something to buy for the house anyone can grab that card and use it. We trust each other to use the card properly; everyone knows the PIN code, but that's rarely needed since contactless payment is the norm anyway. It's much more natural than keeping track of the expenses and then paying each other the outstanding amounts. However, it's probably illegal, and if the bank finds out about it, they will cancel the card.
IMHO, IT systems desperately need to approach human behaviour by working in ways analogous to the real world. Since I'm involved with IT systems I don't struggle most of the time, but people who are not that tech-savvy have a hard time figuring out daily stuff like: what is the iPhone's password for, what is the iCloud password for, what is the Gmail password for, why do I need to enter a code in WhatsApp, etc.
Actually, I think I struggle too: I never came around to understanding Mastodon. I'm probably defenceless against phishing attacks on Mastodon; I will type whatever the screen tells me to type.
> IMHO, IT systems desperately need to approach human behaviour by working in ways analogous to the real world. Since I'm involved with IT systems I don't struggle most of the time, but people who are not that tech-savvy have a hard time figuring out daily stuff
I'm pretty much the website key master for everyone in my family. Since nobody else is "in computers" they really don't have a clue about what things need passwords and why. They would NEVER voluntarily complicate their lives with 2FA or even with a password manager. If it wasn't for me, they'd just use "hunter2" and share it across every single device and service they use. If I told them they couldn't just type in their Netflix password when Gmail was asking for a password, they would just look at me exasperated, like I was making their lives difficult.
The security community really needs to get a grip and start designing systems that are compatible with the extremely low-tech-interest population if we even have a hope of securing systems. If I knew what the solution was I'd be rich.
> The security community really needs to get a grip and start designing systems that are compatible with the extremely low-tech-interest population if we even have a hope of securing systems. If I knew what the solution was I'd be rich.
Most of that population seems to do fine managing house keys, car keys, locker keys, etc.
You sure about that? I inherited what feels like 1,000 keys when my in-laws passed away. Who the hell knows what any of them are for, and they sure as hell didn't.
I can't imagine that anything less than subdermal implants will be reliable, for some people.
If the implant fails, you can just go back to the government office / mega corp, show your DNA, and get a new one.
On further reflection, person a) has an evil twin who steals their identity, and person b) doesn't trust the government / mega corp. Back to the drawing board.
Because they don’t really have any other choice. You get a key with the lock. Even if they all happen to be the same blank, it’s substantial work and expense to get them all keyed alike for most people.
Maybe that’s our solution right there—when you register for a service instead of relying on users to select a secure, unique password we should generate a “correct horse battery staple” and only support rerolls, not setting arbitrary passwords. Guaranteed some minimum level of safety and complexity and no reuse.
> Because they don’t really have any other choice. You get a key with the lock. Even if they all happen to be the same blank, it’s substantial work and expense to get them all keyed alike for most people.
You have lots of other choices. You could use combination locks, time locks, biometric security measures, paired keys, etc. The simple key-based lock seems to be particularly simple and accessible to consumers.
...and there are, and they're remarkably similar to what you do with YubiKeys: you have extra keys, and when you lose one, you use the other to get in, and then you invalidate the old keys (although in the physical world, this means getting a new lock and a new set of keys, instead of just getting one new key and removing the lost key as a valid one).
True, but online accounts usually number in the dozens for most people, so that's definitely more of a burden. Also, it's a mental load, while physical keys carry the "password" physically.
Is that true though? AFAIK they sell CC info for pennies.
The good thing about the physical device is that you can easily tell if it's stolen.
For passwords, there are numerous services that keep track of the leaks and even Apple has incorporated that into their password manager but it all depends on mass leaks to work.
> it's probably illegal and if the bank finds out about it, they will cancel the card.
In the US, anyway, this isn't illegal unless you have to sign something and sign someone else's name. So just sign your own (nobody actually checks signatures).
It might be against the CC issuer's terms of service, of course, but that's a whole lot different from being illegal.
IIRC, if you give someone your card, you're authorizing them to charge the credit account. The bank is totally fine with this as long as you pay the statement.
All of the following are routine. We will name our example person "Bob".
1. Bob owns an important item. He believes that he knows where it is. He is wrong.
2. Bob owns an important item. He is well aware that he has no idea where it is.
3. Bob owns an important item. He knows where it is. He is right about where it is. Unbeknownst to Bob, other people frequently borrow or otherwise meddle with his item.
4. Bob has taken his important item with him, for security. Unbeknownst to Bob, it fell out of his pocket an hour ago.
5. Bob used to own an important item. When he cleaned his house, he confused it with a different, unimportant item, and he threw it away.
> Passkeys are meant to be a password _replacement_,
No. Passkeys parasitize the FIDO2/U2F standards, which were developed to be (as the name implies) the second factor. Resident keys are meant for on-device 2FA with a PIN, a functional replacement for smart cards.
Someone (Apple, maybe) thought it a good idea to consider WebAuthn good enough to be the only authentication factor (no hardware binding, keys roamed via iCloud) but TouchID/FaceID-protected on device. And they branded them as passkeys.
> and for that you probably want the 2-factor properties afforded by phones or desktops which usually require "something you know" or "something you are" to unlock in addition to the "something you have" afforded by physically possessing them.
You don’t. 2FA is not a goal in itself. The goal is to have user authentication that is protected from phishing, brute force, and credential stuffing, and that is also not as hard to implement as smart cards.
FIDO2 does that. The problem with Apple’s, Google’s or Microsoft’s implementations is not that they are less secure on a protocol level between the authenticating site and the user’s device; that’s exactly the same protocol. The problem is that the site now has to trust the user’s personal account on one of these platforms, trust that the user did the right thing, and trust that the platforms will always do the right thing - e.g., that they will handle attacks on the user’s personal account properly.
Microsoft Live allows me to sign in to live.com with nothing else but my Yubico Security Key. That's right, I don't even need to know my username; I just plug in the key and touch it and I'm logged in. And when I write "I" you should read "anyone who has physical possession of this key".
I think that's astonishingly bad opsec for a Big Tech cloud service. If I were a sane person, I would deregister that key as a FIDO2 device, but I guess I'll be OK for now. I shouldn't have posted this in public. This comment will self-destruct in T minus 10 minutes.
Microsoft is also the jerk who won't let me use the self-same key for logging into my Windows 10 Pro notebook, no how, no way. Windows Hello does not play nice with Yubico. My notebook has no fingerprint reader and no infrared camera, so the Windows Hello alternatives are slim pickings.
Yes, that’s resident (or “discoverable”) keys the article author is talking about.
You don’t have to do it this way. I configured my Yubikeys to be the second factor and not to use resident keys. It’s possible, although I don’t know if Microsoft allows users to roll back from “passwordless” and discoverable keys.
I want to state it explicitly: FIDO as a technology allows either. It's a particular platform's choice to go with discoverable keys.
> Yes, that’s resident (or “discoverable”) keys the article author is talking about.
No, I said I'm using a Yubico Security Key. This is not a Yubikey. This key has no storage. How can it possibly store resident keys? The YubiKey Manager app can't even connect to this key. It's very basic, it has no TOTP slots, it has no configuration, it only does FIDO2. How would resident keys get in there in the first place? The article cites a strict limit on the number of slots, but it has zero slots.
You can use the Yubico Authenticator app's WebAuthn feature on the desktop to see resident credentials on their Security Key product, same thing with Chrome/Chromium's security key settings pane.
Nope. I have Windows Yubico Authenticator v5.1.0, and with the Security Key plugged in, all screens are blank.
In Chrome 114.0.5735.199 on Windows 10 Pro, there is no "security key settings pane". The closest thing available is "Privacy and Security -> Security -> Manage phones (control which phones you use as security keys.)"
However, in terms of resident credentials, I thank the GP and I stand corrected, because Yubico's own specs say that this key sports 25 slots. I wonder how many are currently in use, and which version of the CTAP protocol it is using...
So I just tried this with a blue Yubico Security Key with 5.4.3 firmware using Yubico Authenticator 6.2.0 on Linux, and I was successfully able to manage my resident credentials using the Authenticator after setting a PIN and saving a resident credential via https://webauthn.io.
I'd check your firmware versions, update your Authenticator, ensure you have a PIN set and ensure you're correctly saving a resident key on your device when registering with a service.
For Chrome, a visit to chrome://settings/securityKeys should do it, but I just tried it in a Windows VM and it is not present in the menu, while it is present on Linux and macOS.
> It’s possible, although I don’t know if Microsoft allows users to roll back from “passwordless” and discoverable keys.
I don't know about Microsoft specifically, but it's possible to register the same FIDO2-capable security key with a service as both a passkey and a U2F token.
U2F authenticators and the U2F protocol cannot support passkeys. A passkey is a discoverable credential which supports user verification. U2F supports neither discoverability nor user verification.
Passkeys as a user-facing term is meant to describe a user experience. Second factor authentication using U2F is a different experience.
> The problem is that the site has now to trust user’s personal account in one of these platforms and that the user did the right thing and also the platforms will always be doing the right thing - e.g., they will handle attacks on user’s personal account properly.
That is in fact how passwords work today. You can't tell if my password came from my head or from an Excel spreadsheet printout I carry around in my wallet; from a cloud-synchronized password manager or if I use the same password for every website which will accept it (otherwise, I will add exclamation marks to the end until it does).
I've been trying to wrap my head around this and my layman understanding is that there's an assumption (but maybe not baked into any requirements/standard) that use of the hardware key is locked behind either a biometric check (FaceID/TouchID/etc) or password. In other words, there might be an implicit second factor baked in to the passkey itself.
A passkey is a discoverable credential (meaning - a website can ask the system for it without knowing who the user is first) with user verification (meaning, it can ask the system providing the passkey to verify the user).
For a platform like a mobile phone or laptop, this user verification might be a biometric or a system password/pin confirmation.
For a security key fob, they may have a fingerprint reader or a pin entry pad. Or, they may ask the browser/phone/laptop to prompt for PIN entry on their behalf.
One could imagine a wearable using a biometric scan, or even monitoring for continuous wear and only asking for a confirmation gesture/tap.
WebAuthn is an API to talk to authenticators, and authenticators are a box which could hold anything from a single factor to a full authentication process.
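To make the "box" concrete, here's a hedged sketch of the two request shapes a site can hand to the browser. Field names follow the WebAuthn PublicKeyCredentialRequestOptions dictionary; in a real browser the object is passed as `navigator.credentials.get({ publicKey: ... })` and the byte fields are real server-generated randomness, not the dummies used here:

```javascript
// Passkey-style login: allowCredentials is empty, so the browser asks the
// authenticator "what discoverable credentials do you hold for this site?",
// and userVerification makes the authenticator check a PIN/biometric itself.
const passkeyLogin = {
  challenge: new Uint8Array(32),       // fresh random bytes from the server
  rpId: 'example.com',
  allowCredentials: [],                // empty => discoverable credential
  userVerification: 'required',        // PIN / biometric on the authenticator
};

// Classic second-factor login: the server already knows who is logging in
// (they typed a username/password) and lists that user's credential IDs.
const secondFactorLogin = {
  challenge: new Uint8Array(32),
  rpId: 'example.com',
  allowCredentials: [
    { type: 'public-key', id: new Uint8Array([1, 2, 3]) }, // stored at signup
  ],
  userVerification: 'discouraged',     // presence (a touch) is enough
};

// A request behaves "like a passkey" exactly when no credential IDs are
// pre-supplied and user verification is demanded.
const isPasskeyStyle = (opts) =>
  opts.allowCredentials.length === 0 && opts.userVerification === 'required';

console.log(isPasskeyStyle(passkeyLogin));      // true
console.log(isPasskeyStyle(secondFactorLogin)); // false
```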
When using a resident/discoverable credential, the authenticator is supposed to authenticate the user (using a PIN, biometrics, etc.). This fulfills the multi-factor requirement. All passkeys/WebAuthn credentials are something you have, and you can use something you know or something you are to unlock the credential stored on the authenticator.
Platform authenticators have made it more obvious that some people took the multi-factor model as some immutable truth of the universe.
The modeling of authentication techniques as factors shows the strengths and weaknesses of the categories. The purpose of 2FA was to pitch authentication processes that counteract those weaknesses through layering.
Platform authenticators aren't just providing an authentication technique - they are a user-supplied authentication and recovery process.
Even understanding the entire workflow of that process, you may not have the ability to retrofit that into a _larger_ process to meet your regulatory and security requirements. But that workflow is actually per vendor, per device, configurable by the end user, and evolving over time.
This has been an ongoing problem for ages, because the 'knowledge factor' was actually often something the user didn't know, but something provided by a software agent (password manager) which had its own configurable authentication and recovery processes. It just eventually got ignored as people shifted to thinking of the second factor as 'the thing that makes up for all possible weaknesses of the password'.
IMHO this is why passkeys are pitched as a replacement for passwords, e.g. as a knowledge factor. It may eliminate your site's need for another factor if you were mostly concerned about phishing. It stops you from needing to use breach lists, and limits the impact if your credential table gets exposed.
It isn't a great fit for regulated/secure environments, which may still need to do all the same additional factors for risk mitigation or compliance. This is a very complex problem to solve, though - platforms are not going to want to act against their users' expectations, such as losing all banking credentials when you get a new phone.
> Passkeys are meant to be a password _replacement_, and for that you probably want the 2-factor properties afforded by phones or desktops which usually require "something you know" or "something you are" to unlock in addition to the "something you have" afforded by physically possessing them
Yes, because the keys have a PIN for exactly this use case, similar to the ATM card or SIM card you already know.
The impression I get, though, is that the PINs are typically short (especially if we have to enter them every time we access the key). Now, how physically safe are hardware keys - can the actual private key really not be extracted from them? In contrast to an ATM or SIM, we essentially rely on the device itself to enforce the "max number of attempts", not an external entity.
Once the key is extracted, brute-forcing the PIN is not a problem, because it is likely to be simple - unless the devices somehow enforce long PINs.
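Back-of-envelope arithmetic for that worry (the CTAP2 retry limit of 8 consecutive failures is from memory of the spec, so treat it as approximate):

```javascript
// A device-enforced retry limit makes the tiny PIN space irrelevant;
// an extracted key with no limiter reduces it to a blink.
const pinSpace = 10 ** 6;                     // 6-digit PIN: one million values
const guessesPerSecOffline = 1e9;             // a modest GPU rate, fast-hash worst case
console.log(pinSpace / guessesPerSecOffline); // 0.001 -> milliseconds, offline

const maxTriesOnDevice = 8;                   // CTAP2-style lockout counter (assumed)
console.log(maxTriesOnDevice / pinSpace);     // odds per stolen, un-extracted key
```

So the whole security argument rests on the key's hardware resisting extraction; if it does, even a 6-digit PIN survives, and if it doesn't, no humanly memorable PIN would.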
> Passkeys are meant to be a password _replacement_, and for that you probably want the 2-factor properties afforded by phones or desktops which usually require "something you know" or "something you are" to unlock in addition to the "something you have" afforded by physically possessing them.
Hardware security keys that implement FIDO2 UAF can be protected by PINs/passwords or biometrics, the former being "something you know", and the latter "something you are", with the security key itself being "something you have".
Until there is something like pre-registration ("here are all my keys from all my devices, trust them all" - not possible with current standards) mechanism - I suppose, yes?
I don't really understand if there's any other way to make all this work if not for portable authenticators. How is one supposed to log in from a different machine if it's from a different ecosystem that doesn't have the original passkey (e.g., log in on an iPhone if I've signed up from a non-Apple desktop computer)?
For the other direction (phone providing computer access), there is a hybrid flow. You select an option like 'use passkey from another device', and it will pop up a QR code. Scan that with your phone/tablet, and it will provide the interface to confirm and authenticate on your phone. That then lets your computer in.
Some sites may have flows to detect you used a credential from another device when your local device supports passkeys, and just prompt you if you want to register a second passkey to make things easier in the future.
There's nothing that prevents a computer from scanning a phone-displayed QR code to work in your given direction, except that it is not what a user would expect.
Dashlane and 1Password have support for providing passkeys via browser extensions, which provide different sync 'boundaries'. Android and Apple OS's both have beta API to provide these apps the ability to plug in at a system level. It's feasible that even Apple/Google could publish apps that use these API on one another's platforms.
> You select an option like 'use passkey from another device'
Is this a part of any standard? It's most certainly not a part of any WebAuthn spec, and the sites I've seen that mentioned passkeys did not offer this option.
This is implemented in various platforms/browsers, deployed over the last 10 or so months. I believe Microsoft may have added this in the latest Win11 previews.
Realistically, I don't think this will completely replace Yubikeys, nor do I think that only "security nerds" use them.
In reality - the majority of leading organizations use Yubikeys to secure authentication across their company. While it's likely not as common for consumers, it is probably the most trusted solution in the enterprise today.
I think the key (pun not intended) differences are that phones are something you'll always have on your person, Apple/Google will allow it to sync across devices, and phones require a pin/biometrics to authenticate with them.
SIM swapping lets you take over a phone number. It does not let you clone the keys stored in your physical phone’s secure storage / enclave / key storage.
So SIM swapping makes SMS verification vulnerable (that just depends on controlling the phone number), but doesn’t fundamentally affect iPhone/Android passkeys.
There are two separate problems - the first one is to make sure others don't have the access you don't want them to, while the other is to make sure that you/others have the authorized access.
SIM swapping means someone else might be able to have my phone number, but I'll eventually get this number back via legal means. So the number itself is something I own (at least in my country). Now if a site assumed this and phone numbers were meant to be constant, it'd mean I could always get my account back.
But of course this depends on which problem I consider more important. It's better to lose access to my FB account forever than to allow someone to gain access to it for a moment, because that might cause harm.
Similarly, it's better to lose access to my bank account until I can visit them in person than to lose all the money, but in this case the weakest link is probably not the key itself.
I'm quite concerned that phone numbers I don't own or addresses I no longer live at have legal "owners" and their friends/sublets who could decide to impersonate me with accounts that still have these old details, and it's kind of hard to separate that from my new details having been the illegitimate hack.
I think most people wouldn't risk that unless they are pretty messed up, and an average legal system can deal with it unless it too is pretty messed up. A past residence in a mediocre city in the US provides all the crackheads needed for such a torturous comedy.
Passkeys combat phishing and sell mobile devices that have secure enclaves, though the mobile operating system is apparently syncing the master key between devices enrolled in the same subscriber account. However, during MOST urban assaults, phone theft/damage is among the first things to happen. It's why all of this phone-as-token crud combined with FaceID is a bad idea that puts convenience above security. The government should be pushing everyone into a federal SSO so that this is not needed and so they don't need to worry so much about encryption.
I'm not sure that a second factor was ever the goal of 2FA, but rather the desire was a key that users couldn't pick themselves (because they use their birthday, they tell example.com their Google password every time they log in, etc.)
That said, I don't feel great about letting security keys be the only factor. They can be stolen. So can phones, but phones make you unlock them before they'll be your password. (Windows Hello is the same thing.) That seems like the right balance to me. Your phone can authenticate that you are you; a USB dongle can't.
Modern security keys (made in the last 5 years or so) support CTAP 2, which will support setting and using a PIN even if there is no hardware keypad. The client system will prompt for the PIN before letting you use the credential.
Chrome I believe will walk you through setting up a PIN the first time if a site requests user verification, while last I checked Apple platforms require it to already be configured.
I think they could be great for websites I don't really care about and already use weaker passwords for. But for important sites where security matters? Nope.
Passkeys are unphishable and can't meaningfully leak credentials in the case of a hack, nor can they be reused by design. For "important sites where security matters" they are literally better in every way than a password, it doesn't matter how weak or strong. You can use a pure software solution, and soon probably even your existing password manager, to handle them. Again, you should think of them as replacing passwords. You can still enforce post-authentication requirements like SMS or calls to known numbers, bank account deposits, magic follow-up links in a confirmed email, push notifications, etc.
SSH keys are the best analogy. "SSH keys are great for useless servers, but for important servers? no way!" No! That's exactly where SSH keys are most useful. And you also happen to encrypt your SSH keys locally with a password, don't you? This is the exact same principle, but applied to arbitrary websites. Nobody goes around randomly generating login passwords for SSH'ing into each and every server they use, and then pats themselves on the back.
> Passkeys are unphishable and can't meaningfully leak credentials in the case of a hack, nor can they be reused by design.
Let's assume a "passkey device emulator" written in software; quite realistic IMHO for someone to use, considering the cost of hardware authentication devices (phones, YubiKey etc.)
If someone using such emulator gets hacked and has their passkey emulator data stolen, is there anything preventing a credential leak?
The software you're describing is called a "Password Manager", and several do support passkeys already in newer versions. There's no real "emulation." 1Password 8 supports them just fine, your browser has APIs so third-party software can integrate exactly for that. So, the answer to your question is pretty much "exactly the same scenario as your password manager getting leaked", which is basically unsurprising and already well understood when you frame it this way, I think.
The particular case I was referring to (and probably should have been clearer about) was when a website operator gets hacked; in that case the only information an attacker gains from your user account is a public key, which isn't of much use. But like, that's actually a major issue in practice, because the value of hacking a service operator is often far greater than just one user. That exact scenario was one of the motivations for using password managers in the first place, too: to mitigate operators getting hacked and common passwords getting reused across sites, thus turning a single compromise into compromises across many services. So it has all come full circle in a sense; now we've finally recognized that instead of shoehorning passwords into becoming pseudo-random strings that might as well be base64-encoded bytes from /dev/urandom, you might as well go "all the way", get those raw bytes from /dev/urandom directly, and then use them as key material for a public-key exchange.
Again, the best analogy is to just imagine that you used SSH keys to log into a website. That's all this is. It's software. Then you remember: oh yeah, SSH key synchronization and enrollment across machines sucks ass, and normal people would hate doing it. Hey, you know what, we already use passwords to encrypt SSH keys -- so what if we added a storage synchronization layer between your machines to keep those SSH private keys synchronized, encrypted with that password, and stored using $FAVORITE_SERVICE_PROVIDER? That's pretty much it, in a nutshell. You just reinvented modern Passkeys. Most of the threat models at that point are well understood: what service provider or software to choose, how secure is the local encryption, should you use two-factor authentication to further improve unlock safety, etc.
> when a website operator gets hacked; in that case the only information an attacker gains from your user account is a public key, which isn't of much use.
How is that different from situations where a website gets hacked and all the attacker gets is a well-hashed version of a unique password? In either case it isn't doing the attacker any good.
It isn't, if the website implements it correctly, and the user uses a strong password, and there's a salt -- then yes, probably, they aren't getting much information. But if you could always rely on all of those, a lot of these problems would not ever be problems; alas, here we are, in this particular world.
And if passkeys were only equal to passwords in practice, it would still -- IMO -- be worth upgrading to passkeys, because they're, for this case, a better foundational basis to work from (public-key cryptographic authentication versus sharing symmetric keys), and less error-prone for users and operators. But in practice they are aiming to actually be better, faster to use, and more secure, since every passkey implementation is basically designed around syncing (iOS 17 TBD) and device authentication, and they are phishing resistant, which seemingly nothing else can hope to solve, so we just gave up on solving it and don't ever mention it because it's the user's fault that they did it. (No, seriously, did we all just give up on that entirely?)
I will keep invoking the SSH key analogy here. Very few people are paranoid about SSH keys being some weird psychological "trick" to take Freedom Loving Passwords away from them or whatever (not referring to you), and most people aren't splitting hairs over "Well, you know, if /etc/shadow and /sbin/login are set up correctly, and the machine is secure, then there's no real point to using an SSH key, because my password is safe on disk, and you can just trust that." OK, and? It doesn't matter whether you're logging in as root or a normal user, on your VPS or a friend's box. People just use SSH keys instead. Everything works around SSH keys today. People do not want to deal with your secret key material. Passkeys are in many ways just SSH keys for the browser. There really isn't much here to think about when you look at it like this, because the whole basic idea has been around for decades now.
With passwords, the user choosing a unique password or the site choosing to use a recommended process for hashing passwords is proper hygiene, but requires knowledge and is a choice.
With passkeys, there is no opportunity to have bad hygiene. The user does not pick a password. The site does not have secrets to store unprotected.
If the device is exploited, they can also install a keylogger and steal regular passwords.
I think this would be where hardware based authentication via TPM or similar would be useful. This would allow the device to be taken over and the private key material would still be safe.
The secrets in the password vault can be used to authenticate and change settings in various trusted systems, like configuring call forwarding.
Cloud-based backups or the local filesystem could theoretically be inspected for software TOTP secrets.
Responding to the potential for bad user choices and full compromise quickly gets you to the point where your options are separate hardware or requiring in-person confirmation. NIST 800-63-3 AAL3 is probably most appropriate to look at if your risk profile mandates this.
For my important accounts the password is long, unique, and not recorded anywhere; that is one way that passkeys are not better. There are literally no credentials to leak until I go log in and type it, whereas passkeys have to be recorded somewhere - otherwise how would they work? If someone gets my private SSH key, that is a bad time (which is why we password-protect them, or at least you really should).
To follow the SSH analogy, you (should) only use SSH keys to gain access to an unprivileged user account, at which point you elevate permissions via sudo and another factor (password/MFA) - and really there's an argument to be made that the unprivileged account should have MFA for login as well.
Nobody puts their SSH public key in the root account of a server and pats themselves on the back that it's secure, so why would passkeys be any different for accounts you truly need to be secure?
You seem to be making up a bunch of scenarios that aren't really relevant (what if someone did this and that with sudo, what if the bytes were stored here). You don't want to understand the actual security model, which is fine, but only on Hacker News can someone say with confidence "actually unphishable public keys that can't be leaked, are not good for security." Again, you might as well be arguing against SSH keys. That won't get topped for a while.
I understood the parent poster to be saying that since his passwords are unique and are not stored anywhere, then if his device were to be compromised, the attacker could only steal a password once it is manually entered, in which case it wouldn't automatically compromise his other passwords.
Conversely, if he were to use a password manager on his device to store passkeys, the attacker could compromise all his passkeys once one of them is used.
Admittedly, it is an unusual use case (I mean, how do you generate and remember unique, sufficiently long and random passwords without storing them anywhere?) but I can see how passkeys could be worse for him if this is really what he does.
I don't think a compromised device, and thus access to local data and potentially your password manager, is such an unusual situation, but at that point it is true you do have bigger things to worry about. A device like a computer is also far more likely to get compromised than a phone.
That all said, it's fairly easy to remember a 20-30 character unique password if you use a passphrase and only have a couple of places that are "that important", such as banking, broker, iCloud, email, etc. Everything else can go in Keychain.
> I don't think a compromised device, and thus access to local data and potentially your password manager, is such an unusual situation
Right, but what I meant is that it's unusual to have unique passwords for each service *and* have them memorized/not stored anywhere (well, sufficiently long and unique that if an attacker knows a few of them, it doesn't help him guess the others).
That's not what the vast majority of people do.
> That all said, it's fairly easy to remember a 20-30 character unique password if you use a passphrase and only have a couple of places that are "that important", such as banking, broker, iCloud, email, etc. Everything else can go in Keychain.
Many of these services don't allow such long passwords where you can use passphrases. For example, both of the banks I use (in two different countries) only allow a fixed size 6 digit numeric password. Somewhat strict password length requirements are not very unusual.
While funny, the problem with this xkcd, besides the password length problem, is that 1000 guesses per second is way, way, way underestimating how fast you can crack passwords nowadays if the service uses password hashing algorithms that are still commonly used. Billions to hundreds of billions of guesses per second is more in line with the right magnitude, given a couple dozen GPUs which can affordably be rented in some cloud service.
When you need to memorize passwords or passphrases for two to four services, you're already in the same entropy requirement ballpark as having to memorize one bitcoin seed (i.e. 128 to 256 bits, depending on how paranoid you are) and therefore you run into the same dilemma: if you can memorize it long-term, it means you don't have enough entropy, and if you have enough entropy, it means you can't memorize it long-term (easily/reliably).
Which is why all but the most clueless or the most paranoid (or those who can afford to lose it) store their bitcoin seed somewhere more permanent than their brain [1] -- unless, say, you only do it very carefully and only temporarily, e.g. if you need to cross a border with a large amount of BTC and you really don't want to attract attention, no matter how scrutinized you'll be (and even then it's probably much better to store the seed somewhere in some creative and imperceptible way).
[1] Bitcoin brainwallets were a lot more popular many years ago, but nobody recommends them anymore due to their severe problems: https://en.bitcoin.it/wiki/Brainwallet
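The entropy comparison above is straightforward arithmetic. A quick sketch, assuming the standard 7776-word Diceware list and 5-word passphrases (both are illustrative parameters, not anything the parent specified):

```javascript
// Each Diceware word drawn uniformly from a 7776-word list carries
// log2(7776) ~ 12.9 bits of entropy.
const bitsPerDicewareWord = Math.log2(7776);

// Memorizing a 5-word passphrase per service:
const perService = 5 * bitsPerDicewareWord;      // ~64.6 bits each

// Two services already lands you at ~129 bits, four at ~258 bits...
console.log((2 * perService).toFixed(0));        // "129"
console.log((4 * perService).toFixed(0));        // "258"
// ...which is exactly the 128-256 bit ballpark of one bitcoin seed.
```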
And that's fine, but some of us do for the sites that are important, and that is better than storing something in a password manager, whether it be a passkey or a password.
> For example, both of the banks I use (in two different countries) only allow a fixed size 6 digit numeric password. Somewhat strict password length requirements are not very unusual.
That is a problem with the banks; mine is happy with my 30+ character one and has MFA. Banks that can't even support a decent password are unlikely to support passkeys anytime soon.
> way underestimating how fast you can crack passwords nowadays if the service uses password hashing algorithms that are still commonly used
If the provider (bank) is compromised and salted passwords leaked, it doesn't matter: they have already compromised the bank and your account. And I still do not think you can quickly crack a password such as "This15aVERY!!securepasswordEH?!!?" - I could be wrong here.
> if you can memorize it long-term, it means you don't have enough entropy, and if you have enough entropy, it means you can't memorize it long-term
Not talking about bitcoin seeds here, just accounts.
Like, I'm not arguing against passkeys, just that they have the inherent flaw of existing on a device/somewhere vs. something that doesn't.
I agree with your post. I'd just like to add a couple of comments:
> if the provider (bank) is compromised and salted passwords leaked it doesn't matter, they have already compromised the bank and your account.
It matters if the only thing that was leaked/compromised was the hashed password database, but not much else.
In fact, the ones who leak the hashed passwords may not be the same as those who hack your accounts, just look at all the leaks tracked by https://haveibeenpwned.com and consider that anyone could download those hashed passwords and crack them.
> And i still do not think you can quickly crack a password such as "This15aVERY!!securepasswordEH?!!?"? i could be wrong here
You could be right, but I wouldn't be surprised if you were wrong here...
Decades ago, the "John The Ripper" cracker was already very good at cracking these kinds of passwords (when CPUs were single core and much, much slower, and it wasn't even possible to run software on a GPU).
John the Ripper was already capable of using many extremely extensive word lists (in different languages) to quickly run through many such passwords, and simply mutating the password by using l33t speak and adding a few numbers, symbols or using mixed case are extremely popular password strengthening techniques which the software was still capable of cracking very quickly, since that doesn't add much entropy.
Although at the time it probably couldn't crack such a "long" password, I'm sure this type of software has become better and the hardware has definitely become many orders of magnitude faster and more parallel, so I wouldn't be surprised if the example you mentioned is well within "can crack quickly and relatively cheaply" territory, even when using salt, as long as the service is using a traditional password hashing algorithm (and not one of the newer compute-hard or memory-hard KDFs).
I mean, to have an idea of the magnitude of the problem, the brainwallet cracking stories of a decade ago were already pretty mindblowing (even considering that it's a "no salt" scenario).
I don't remember the exact details, but I think there were cases of people using an airgapped computer to compute the SHA-256 hash of some obscure passage of some obscure book or poem in some obscure language and the bitcoins were stolen within seconds of being transferred to these wallets (although, yes, due to the "no salt" problem, it stands to reason that all of these wallets were pre-computed by the attacker).
But still, personally I'd feel a lot more comfortable just using and storing a completely random password with a perfectly known amount of entropy, just to be safe, and deal with the compromised device problem in some other way (such as having a dedicated password management device, like a hardware wallet, if you're really that paranoid).
You were the one who compared it to SSH keys, and again: you do not secure root accounts with SSH keys. Or are you arguing that you should just drop public SSH keys into /root and enable root login?
So how are passkeys different than SSH keys? There's a private and a public key, and if someone gets your private key they get access to everything it unlocks.
They can be synced between devices (i.e. from a secure one to a compromised one), exported, etc., exactly like a private SSH key.
Also, I'm not here arguing against passkeys, just pointing out that a long, unique password used in one place, not saved anywhere digitally and existing only in my head, is going to be more secure than passkeys due to the nature of how they work.
For context, we run a YC-backed passwordless company, and have rolled passwordless out at major organizations. While I think passkeys will definitely be the answer for consumer passwordless, I'm not sure this is quite reflected in the enterprise yet.
Passkeys are wonderful for consumer use, because they're meant to enable your own ability to break glass, by backing up the credential to other devices. You can do this via iCloud (by default) or via things like Airdrop.
Technically, the devices you share this credential to, cannot provide "attestation" - attestation is the "proof" that the keypair was created by a specific device (like a Yubikey, Apple Machine, etc). Manufacturers (like Yubico) ship a keypair / certificate onboard your key, that can't be extracted. There are no external methods to interface with this keypair - granting admins high confidence this is a real Yubikey.
You can see where this starts to become a problem without attestation, combined with the ability to share the keys. Enterprises are not willing to inherit the risk of an airdroppable credential exposing access to a privileged employee's account. With a Yubikey, there is essentially no risk of remote digital theft.
Ultimately - passkeys can't even be used to unlock your machines, or servers. FIDO2 (more importantly, OS developers) have a long way to go before we're done with passwords for good.
Today, Yubikeys are filling this gap for most of the enterprise market, some of whom have spent multiple millions of dollars on hardware. Passkeys in their current state are going to be a hard sell.
>Passkeys are wonderful for consumer use, because they're meant to enable your own ability to break glass, by backing up the credential to other devices. You can do this via iCloud (by default) or via things like Airdrop.
> Technically, the devices you share this credential to, cannot provide "attestation" - attestation is the "proof" that the keypair was created by a specific device (like a Yubikey, Apple Machine, etc). Manufacturers (like Yubico) ship a keypair / certificate onboard your key, that can't be extracted. There are no external methods to interface with this keypair - granting admins high confidence this is a real Yubikey.
Passkey is not a technical term, but an experience term. So it somewhat falls apart when you use it for technical arguments.
Apple supports passkeys. Android and Chrome support passkeys. Microsoft supports passkeys. Yubikeys supports passkeys.
But the authentication process for user verification and the capabilities/restrictions around things like cloneability may differ wildly.
A government agency may choose to support passkeys, but only when provided by a FIPS-certified authenticator which meets AAL2 requirements. Those won't come from Apple or Google, at least not today.
An enterprise may choose to support passkeys that are generated via software/configuration provided by their MDM management product. Apple announced beta support for this.
However, if you are doing government-to-citizen you may experience a lot of pain trying to mandate those particular hardware authenticators. It will be painful to convince citizens to spend $80+ USD on hardware. It will also be painful because web technologies are built around user choice, and WebAuthn API and user experience are unlikely to ever be optimized to help restrict user choice.
>Passkeys are wonderful for consumer use, because they're meant to enable your own ability to break glass, by backing up the credential to other devices. You can do this via iCloud (by default) or via things like Airdrop.
What happens when Google or Apple decides to ban you by mistake due to anti-bot or anti-fraud detection or whatever other nonsense?
In my opinion, this is a terribly shitty solution from a security point of view.
A dream for governments and companies like Apple and Google to control your digital life.
With passwords, you are effectively "stateless": you can have a private email account, or any account, and cross a border without anyone knowing that you have an account, without being forced to give access, without the access element being extractable from a piece of hardware, and without being locked out because you lost access to a device.
Even if master keys and co. were stored on TPMs, secure elements, etc., it is just a matter of years and compute power before some government can access them.
Most of the time, the device already proves that you have a credential for an account.
The manufacturer of your machine or OS can easily be forced by authorities to give access to the secure part, willingly or not.
I think the author is taking the argument too far and also not being precise in their language, which is exactly the "passkey hype" they fault Apple and "Fido" for. Maybe just cut the exposition out of the article; I don't think it helps the argument, and it makes the author sound angsty rather than contributing to the well-thought-out considerations towards the end. Point being, I think there's a point, but it was hard to get there.
Anyway:
1. Passkeys, to me, are the private key. It doesn't matter whether it's resident or not, or whether it's device bound (non-extractable) or not, whether a "genuine" authenticator made the signature or not, or whether user presence was verified or not. A Passkey is not the authenticator/library as the author claims, and it's not the protocol or some set of protocol features.
2. The world is better off if everyone uses WebAuthN instead of passwords irrespective of how the passkey is stored. Full stop. So let's start there. Additionally, where I diverge from the author, I don't think preserving the sanctity of decade old hardware keys which only conform to older versions of a TPM spec is of paramount concern, either. The author's fixation on that is a little strong, but it's understandable.
3. I don't think you need to discourage resident keys. But I also don't think RPs need to care about whether the key is resident or not. Let the library on the user's browser/device decide how to find the key. An RP wanting to verify user presence is one thing, but saying this key must be stored with the user is IMO a step too far. It's likely that RPs don't even care and are just trying to avoid storing some extra bytes in their DB. Or their security team overly cares and is making up reasons why the RP needs to require resident keys (IMO a bad security take all things considered, but I can see the tin-foil angle).
So I think the simple solution is probably to, for WebAuthN specifically, deprecate the ability for the RP to specify that it needs a resident key. Problem solved.
Oh and while we're at it, forbid hardware attestation. The web doesn't need that. If rk=required and hardware attestation need to exist for tightly controlled enterprise use cases then whatever, but relegate them there preferably in some non-required protocol extension.
I'm not clear on what you mean by this. Do you mean hardware-backed non-extractable resident keys? Or the more simple idea that your WebAuthN agent can store a key?
I don't see a problem with a WebAuthN agent storing a key (ideally locally encrypted at rest with a hardware resident key from a user or device TPM). Having users have a passkey database that they sync across devices is not really a problem as far as I can tell. Do you feel like that's a problem?
It's suboptimal: it basically creates the same situation as password managers where compromising that database is game over. It's a much better situation if instead you enroll multiple different keys. The main issue is if you want to automate this you need a standard way to enroll one device on all sites another device is enrolled in, which AFAIK doesn't exist. (you'd also want to have an automated way to revoke another device in the case that it is compromised).
Having multiple keys enrolled would also allow for better recovery from websites in the case of a suspected compromised device: they can simply disable one key, allowing another key to still log in (and either vouch for or disable the other). You could also have flows where certain actions require multiple keys for authentication.
> it basically creates the same situation as password managers where compromising that database is game over.
So your solution is to split the DB up and store it encrypted, using the same key, on each service's servers? I'm dubious that does anything for your case (not to be confused with me agreeing that it's totally okay to have non-resident keys).
You can only compromise the encrypted passkey DB if you compromise the hardware key, or by brute force. If the DB is encrypted at rest using a hardware key, the security model is essentially isomorphic to that of storing encrypted keys on a server. You're just playing with where the key sits at rest. It's still ultimately encrypted by a device's hardware resident key (assuming a sane "soft" WebAuthN implementation by the PW manager).
Unless I misunderstand you, I think you're letting the perfect be the enemy of the good.
EDIT: I think I misunderstood you. It appears you're arguing for resident keys. The person I'm responding to is arguing against resident keys (and I'm asking why they think it's a security mistake) so your response doesn't really make sense.
I understand that technically in a raw security sense it's better for a user to enroll multiple devices with HW resident keys that never leave the authenticator/TPM hardware.
The argument these days is more about what's an acceptable compromise that will get people to actually use Passkeys, because users carrying HW keys around is obviously a failed solution.
Encrypting a soft DB of Passkeys at rest with a user-bound key, and encrypting that user-bound key at rest with a device-bound resident key, and syncing that DB and user-bound key between devices seems like an acceptable compromise that's effectively isomorphic to resident keys everywhere.
No, the alternative being argued for (by OP primarily, but I understand his point) is to only have master keys on devices, keys which can't be moved between them, and to enroll separate devices (and I think this really needs thought from a standardisation point of view). Resident keys are a mistake in that they can be moved between devices. If you allow that then you basically just have a password vault, just maybe with a slightly better lock on it. It's a heck of a lot better than the status quo, but it's not the best option.
> Obsession with passkeys are about to turn your security keys (yubikeys, feitian, nitrokeys, ...) into obsolete and useless junk.
I'm sad about that too but I'm finding some relief in the fact that non cloneable / non shareable physical security keys aren't going to be totally obsolete though: they're supported by OpenSSH and shall probably be so for a very long time. Even Google teaming up with Microsoft teaming up with Apple cannot fuck that one up in the near future.
Not that it's much of a consolation.
But yup, it's really sad how a conglomerate of the biggest actors managed to fuck up security keys while riding on the security benefits non-cloneable keys do bring:
"It's physical non cloneable non shareable security keys but better because... More convenient for they're actually not physical and actually cloneable and actually shareable".
I saw it here on HN too on the various threads on passkeys: "It's better because it's more convenient".
I love this writeup. It explains a lot I didn't know about passkeys in a way that's easy to understand.
As a total self host enthusiast with a prime interest in hardware tokens (I've been using smart cards to log in for decades) this is really a bad thing. I don't want to be dependent on Google or Apple or Microsoft. Absolutely not.
There's also an extra factor the article doesn't mention: because passkeys are synced, there's no need for the website to offer to enroll more than one key, which makes using multiple keys (needed for backup purposes) difficult.
It finally explains how I can do usernameless work with M365 though on my Yubikeys.
I hope a centralized solution comes out which allows us to use hardware backed tokens but still sync the resident keys using our own servers somehow. I know bitwarden is working on something but that's not good enough, it's not really token backed. It uses a master passphrase which is a big step back IMO.
The difference between zero and one factor is infinite. The difference between one and two factors is huge. The difference between two and three is almost negligible. Except in Citrix-like situations where you're sharing the actual reader hardware, passkeys are only useful for a device you have physical access to, and that's already one factor. So in that sense they can replace passwords, I guess, but they seem like a strictly worse solution.
Is this really how resident keys work? I thought the common use of all secure enclaves is to save keys outside the enclave, but symmetrically encrypted with a master key in the enclave, bypassing the memory limits. I can see the downside for yubikeys/usb enclaves being the keys are then no longer accessible, but for embedded enclaves this should never be a problem.
I don't want to store passkeys in my password manager, the same way that I don't want my TOTPs to be stored in my password manager.
If my 1P/LastPass/BitWarden gets hacked/compromised/pwned by someone across the globe, they still can't compromise my critical services because they don't have my hardware token. I just have to rotate all of my passwords.
If you store everything in your password manager, you've just turned your 2FA/MFA into 1FA.
This is also why you shouldn't copy SSH private keys around, just because "it's easier to only have one fingerprint". Generate one private key per device. This is somewhat mitigated by `-sk` type keys, though. (SK SSH keys are still basically unusable because they are not recognised by a significant number of SSH versions, including the default macOS SSH client.)
The passkey security model was designed with the assumption that the passkey is tied to a device. Using a password manager that's tied to a centralized service accessible from any web browser with an internet connection changes the security model. It seems to me like a passkey in a password manager is no different, security-model-wise, from a username and password with NO 2FA.
The whole idea of Passkey is that the credentials are syncable. The main implementations of passkey are probably going to be Platforms (Google/Apple/Microsoft) and Password Managers. In both cases the credentials will be syncable and tied to a centralized service.
The main difference from passwords is that passkeys are not phishable (since you never send them to the website you authenticate to).
Oh and for SSH, SSH CA and short lived SSH certificate is the only right way ^^ (I recommend Hashicorp Vault for this purpose. It also works for the host key)
> Now, the primary difference here is that resident/discoverable keys consume space on the security key to store them since they need to persist - there is no credential id to rely on to decrypt with our master key!
The article is glossing over the biggest drawback of non-resident keys: if you lose your security key, you lose your master key, and you can no longer decrypt the credentials sent by the relying parties. To mitigate this, you need to register at least two security keys and store them in different locations. But wait, how can you register both keys with a new service while keeping them in different locations?... I don't have much experience with those keys: am I missing or misunderstanding something here?
I wonder if there could be a middle-ground software solution here?
E.g. A piece of software (like a passkey manager or keychain service) that transparently simulates a resident key store by using an encrypted database that resolves services to credential IDs which are then forwarded and unlocked by a non-resident hardware key. One could then conceivably still sync the database around (using whatever services or method you want), and even if the encryption of the database were somehow broken, it wouldn't be the end of the world, as the actual signing is still done by the hardware key.
(Disclaimer: I don't know enough about the actual protocols to judge if the above is actually technically feasible, but would be curious if it is)
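If it helps, that middle ground can be sketched as little more than a lookup table in front of a non-resident authenticator. Everything below is made up for illustration: the hardware key is a mock and the database isn't actually encrypted here, so treat it as an architecture sketch, not an implementation:

```python
import secrets

class HardwareKey:
    """Opaque stand-in for a non-resident FIDO2 authenticator. A real
    device would unwrap/derive the private key from the credential ID
    and sign the challenge; here the signature is a placeholder."""
    def make_credential(self) -> bytes:
        return secrets.token_bytes(16)  # credential ID the RP stores
    def sign(self, credential_id: bytes, challenge: bytes) -> bytes:
        return b"sig:" + credential_id[:4] + challenge[:4]  # fake

class SoftResidentStore:
    """Syncable DB mapping rp_id -> (username, credential ID), so the
    client can behave as if it held resident keys while all signing
    still happens on the hardware key. In a real manager this DB would
    be encrypted at rest and synced however you like."""
    def __init__(self, hw: HardwareKey):
        self.hw = hw
        self.db = {}

    def register(self, rp_id: str, username: str) -> bytes:
        cred_id = self.hw.make_credential()
        self.db[rp_id] = (username, cred_id)
        return cred_id

    def authenticate(self, rp_id: str, challenge: bytes):
        # The RP never had to send us a credential ID: we resolve it
        # locally, which is exactly what resident keys buy you.
        username, cred_id = self.db[rp_id]
        return username, self.hw.sign(cred_id, challenge)
```

Even if the synced DB leaked, an attacker would hold only credential IDs and usernames; the signing still requires the physical key.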
We absolutely need to allow soft implementations to exist. The platform providers are already doing this. You should be able to use your password manager as a passkey manager. The RP shouldn't dictate any of this and the protocol should actively resist platforms locking people in to (their) blessed implementations.
It's all about the FIDO2 hardware attestation. I'd rather use a FIDO2 authenticator with attestation. Call it a passkey or not, I don't want to use syncable passkeys without hardware attestation.
Apple doesn't do attestation so if you require that you're already leaving out the biggest platform.
But it's a bad thing for self hosters anyway. Because parties will make exclusive deals or only wish to deal with authenticators they trust (eg that pay them for 'certification')
At least for the enterprise - this decision should be up to the company. (i.e, flip a switch on your identity provider to enable or disable support for "no attestation")
Some companies are comfortable with the idea of a two-factor method that can be airdropped to friends. Major organizations (AWS, among others) are not huge fans of passkeys for enterprise use. When passkeys were released, our initial response at AWS was to give organization admins the ability to disallow passkeys.
Overall, I think there are fixes coming across the board from Apple and the FIDO Alliance to address some of the early shortfalls of passkeys.
Monopolies were shitting, are shitting and will be shitting on community standards. There will be no end to this until a radical change in the societal mindset.
This is finally a great article demystifying non-resident keys! I got a FIDO yubikey a few months back, it came with no PIN set up, I was surprised that I could just use it like that, websites were not asking me to set it up explicitly.
I set it up via CLI. Then all websites started asking for the pin and storing the keys inside (resident keys).
Knowing that you only have limited slots, I tried deleting the PIN by resetting the yubikey to revert to the old behavior, as there was no way of zeroizing the PIN otherwise (for the sake of experimenting with the tech and understanding key management better; of course PIN or biometric auth is better).
And then suddenly found that I was unable to use the key to auth into my accounts I set up before. In hindsight now I know by resetting the hardware key I have reset the master key inside.
There really isn’t much well written introductions into how all of this works. Thank you for doing a great job demystifying the flow here!
Am I the only person who finds it unreasonable that security keys only have enough storage for 5-10 resident keys? Why can't I store millions of keys? What's that, 50 MB? Why isn't that standard?
Given that passkeys are tied to a hardware ID, does that mean people will eventually stop using VPNs, given they can potentially be de-cloaked by anything that can talk to their TPM?
I am factoring friction and laziness into this. Just about everything outside of Apple hardware has a TPM, and it's already paid for. Not everyone has iCloud, Bitwarden or other tools. So there is what people can do vs. what people will do, and I think they will be guided, if not now then eventually, into using the TPM since it is already present. People tend to follow the path of least clicks. Windows, for example, automatically takes ownership of the TPM by default.
People on Apple hardware will probably use iCloud, which may be tied to their real identity. This creates a lot more questions for me, but that is probably best saved for its own thread.
> A frequent question here is if non resident keys are less secure than resident ones. Credential ID's as key wrapped keys are secure since they are encrypted with aes128 and hmaced.
This is incorrect. The strategy for how handles represent a public/private keypair is security-key specific. For example, Yubikeys shipped before firmware 4.4 used a different algorithm, and Solo keys use a third.
Platforms may also ignore requests for non-resident credentials, and return a handle reference to a resident one instead.
Until it's possible to physically bind a passkey to a device like a yubikey, I see no security "benefit" to handing single-permissioned access to credentials to large tech companies. It's a single point of failure even worse than trusting a password management company like 1Password etc.
I'm still holding out for a self-hosted or federated version of passkeys, potentially similar to how more complex forms of crypto custody work, along the lines of multi-party computation (MPC).
I think that 99% will use something they already have like an iphone, an Android phone, touch id on their mac books, or windows hello. Storage should not be a problem.
> The problem is that security keys with their finite storage and lack of credential management will fill up rapidly. In my password manager I have more than 150 stored passwords. If all of these were to become resident keys I would need to buy at least 5 yubikeys to store all the accounts
How it is possible that THIS is the problem in 2023? Storage is cheap, tiny, and capacious. I feel like I’m reading an article from 1992.
Isn't the physical key a whole computing device with high-grade storage? I can't imagine those using cheap consumer-grade off-the-shelf storage parts.
Enough that it's an issue when you want to store them in a highly secure way?
I doubt that Yubico limited the number of usable keys and the space available per protocol out of sheer spite, or to upsell larger "pro" versions that they never made. Using more space for encryption and other mechanisms is probably part of it, and then it depends on how they allocate space (segmenting by category of data would be totally plausible, for instance).
You can't just drop in generic storage ICs, which are cheap and plentiful.
You need one with a certain degree of verified _good_ encryption mechanisms built in. You do not want it ever communicating with the processor in plain text if you are selling a security device.
I'm honestly still looking for a one-sheet on why I should care about passkeys, and under what circumstances I should adopt them.
I get it. It's a complex topic. When I was younger I would've jumped in and read about it obsessively, but I no longer have the time or motivation to do so.
Dumb question, but how much more would a YubiKey or something cost if they added a marginal amount of storage so it could hold an almost unlimited number of these keys? I'd imagine something like 32 MB would already allow you to store thousands of keys.
Why not simply allow users to specify any public ssh key as an authentication factor? And create a UI around that? Why do we need to create more and more new security crap that no sane person understands?
All I care about is that the keys are SSH keys and the protocol is SSH auth. Then do with that what you will. Store the keys in the cloud if you must. When a user creates an account on a site, the browser gives the user a choice to either select an existing identity or create a new one. All very straightforward. You don't have to mention anything about SSH or RSA. Nobody is forced to learn what SSH means or how it works.
I've never used one so have no idea, but do these security keys really have so little available storage space that they can only hold a few hundred passkeys, and if so, why?
Yeah, there's a cost. The cost is on the consumer, who's being sold bulletproof vests to go to the supermarket because fear is being created about random shootings that could happen, and then you need these high-tech security things to keep you safe.
This type of security e-wang crap is only suitable for highly sensitive confidential data (secret / top secret level stuff) and most consumers get little benefit from it.
Why? Because you've already handed over your data to a lot of places knowingly and unknowingly who are more likely to leak it than you ever will.
Everything comes at a cost. If a company has breaches that leak passkeys, it will have bigger troubles. As many suggest, the main problem this FIDO Alliance attempt tries to solve is passwordless accounts. Entrusting passwords to password managers is a problem on the rise. I guess this solution would mitigate the password mess for the next 10 years. Then, moving forward, I believe the tech sector would also be able to satisfy the security gurus too, hopefully.
A private key for curve p256 is 32-bytes. Let's say we have associated metadata (hostname, whatever) and round that up to 1KiB per key.
A typical user has around 200 accounts but let's give room for 1000 since powerusers love hardware keys.
That's 1000 x 1KiB = 1MiB. This is totally within our technical capabilities. It's not uncommon for small radio coprocessors to have more storage on die. Even old school SIM cards have 256KiB worth.
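As a sanity check on that arithmetic (the numbers are the comment's assumptions, not anything from a spec):

```python
# Back-of-envelope for "1000 resident keys fit in ~1 MiB".
KEY_BYTES = 32          # one P-256 private key
PER_CRED_BUDGET = 1024  # rounded way up for rp_id, user handle, metadata
CREDS = 1000            # generous per-user credential count

total = CREDS * PER_CRED_BUDGET
assert KEY_BYTES <= PER_CRED_BUDGET
print(total, "bytes")   # 1024000 bytes, just under 1 MiB
```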
No, I'm buying a USB storage device with a microcontroller embedded and potted under some very hard plastic. Very much the same thing. The function is different, yes, but when it comes down to manufacturing, it's exactly the same. I could print a wafer for your security key, I could print a wafer for your flash memory. ICs aren't manufactured differently for security keys versus normal ICs. The product is the same silicon, just doped differently to make a different IC/circuit. It's a small computer in a USB stick. It does a limited function. Stop making them out like they're some wizard-stick fancy stuff. You can set up an ESP32 as a security key if you want.
> Stop making them out like they are some wizard stick fancy stuff
But they are. Tamper resistance is a thing, and it's different from the engineering perspective. That's why Yubikey and FST-01 are entirely different beasts.
Most folks probably don't need tamper resistant hardware, though. I mean, they've been doing fine with sticky notes on a monitor...
Most folks are better off with a notebook on the table next to their bed/desk for passwords than anything else. When's the last time you got broken into at home and someone stole your diary? When's the last time you read about someone getting breached because they had their passwords written down in a book next to their desk? Pretty much never.
When's the last time someone got breached storing their passwords somewhere digital? Well, shit, probably a dozen happening every second, and a few dozen breaches somewhere in the world before you're done reading this.
HTTPS only like this blackhats.net.au site comes at a cost too. If there's a browser/server SSL mismatch the text becomes completely unavailable. While if it was an HTTP+HTTPS site I could simply visit the HTTP endpoint. Instead to protect against hypothetical downgrade attacks they've made their content inaccessible and effectively DoS themselves for a small fraction of visitors.
This site won't work on Windows 7 / Chrome 69 as it only supports TLS 1.3 [1]. I believe 5% of the web can't connect [2].
But the text on the site is for technically minded people and the content includes commands you should run and security configuration. Tampering of the content could be quite harmful.
Tampering with the contents is quite unlikely. And anyone visiting a security site as a technically minded person probably has javascript disabled initially.
Requiring HTTPS only for this is like requiring people wear bulletproof vests to visit your backyard BBQ. There is no doubt they are "safer". But it's also pretty silly.
No, it was shockingly common for ISPs and public WiFi to modify sites. And many did inject malicious scripts or redirect users to malicious sites in order to monetize.
This is probably a dumb question, but why not just store a secret seed that is used with an on-device PRNG to generate as many secrets as you need, where a sequence ID gets shared with the counterparty?
I'm pretty sure that is how non-resident keys work on some platforms (like the first gen yubico U2F tokens).
The main feature of resident (aka discoverable) keys is that the RP doesn't need to know anything about which key is about to be used, so it can just say "send me an auth for example.com", and the browser and key handle the rest.
However, with non-discoverable keys, the RP has to provide a reference to the key, which could actually have encrypted private key material in it.
It's a challenge-response with nonces. There is also the browser's role to ensure that a given RP's requests are marked with the origin (domain) they came from, so auth.example.com and auth.example.evil don't overlap. (U2F is mostly concerned about malicious sites, and less about malicious browsers and other nastyware)
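A toy version of that seed-based scheme, roughly in the spirit of the U2F key-handle trick: the device keeps one secret, and the "credential ID" the RP stores is just a nonce plus a MAC that lets the device re-derive (and verify ownership of) the per-site key. HMAC-SHA256 stands in for whatever derivation a real vendor uses; details vary by device:

```python
import hashlib
import hmac
import secrets

DEVICE_SECRET = secrets.token_bytes(32)  # never leaves the authenticator

def register(app_id: str) -> bytes:
    """Create a key handle for the RP to store; the device itself
    stores nothing per-credential."""
    nonce = secrets.token_bytes(16)
    privkey = hmac.new(DEVICE_SECRET, nonce + app_id.encode(),
                       hashlib.sha256).digest()
    mac = hmac.new(DEVICE_SECRET, privkey + app_id.encode(),
                   hashlib.sha256).digest()
    return nonce + mac  # the opaque "credential ID" / key handle

def authenticate(app_id: str, handle: bytes) -> bytes:
    """Re-derive the same private key from the handle, rejecting
    handles not issued by this device for this app_id."""
    nonce, mac = handle[:16], handle[16:]
    privkey = hmac.new(DEVICE_SECRET, nonce + app_id.encode(),
                       hashlib.sha256).digest()
    expected = hmac.new(DEVICE_SECRET, privkey + app_id.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("handle not valid for this device/app_id")
    return privkey
```

This is why non-resident keys are effectively unlimited: the RP carries the storage burden, and the device only ever needs its one master secret.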
I understand that this person is not happy that a feature that is designed to make billions of users safer, might make life annoying for the handful of people who are using a hardware security key when they use that feature :)
Gonna only say this once. Stop building your forts with only one wall and one gate. Build many walls to cross, many gates to open, observe the user through each of these.
lol, OP assumes passkeys or passwords are the only lock being used to protect things. Well, from a security implementation standpoint... I assume someone, either you on the Rust end or someone on the yubikey end, is already a weak link and your password is probably already compromised. But that's OK.
TBH, from a security standpoint, yeah, I expect your password to be correct, but I also assume that it's not secret. It's only part of the parcel. I expect about a dozen other metrics to be correct too, depending on how secure your stuff needs to be or how important the security is. If you don't tick most if not all of these boxes, I don't care if your password or passkey is right. You're not getting in.
The PIN-code-then-push-button on yubikeys is part of this. Your IP, your device ID, your TPM trusted data paths, the time of day you're trying to make access, the frequency of it, the country of origin, the target you're trying to get into, the wifi you are accessing this via... are all part of this. Stop being so old-school about security and propping it up on one point of failure.
Now this bit is going to be a hard biscuit to bite for a lot of folks, but yes, I get that it's harder on the web because you probably don't have the physical end of things under enough control to use it for your security checks/metrics, as it's under user control. But maybe don't store super secret data that needs to stay secret in systems like that. If your web app reaches X level of personal data, or could be involved in X level of harm to society or its users if breached, don't let people sign up without MFA, hardware keys and so on. Force users to detail more info about their fixed locations and regular usage areas and judge their access security on that.
tl;dr: I don't care if your password is compromised; it's 1 of X keys needed. I assume it's compromised. I don't assume all the other X keys are, though.
> Gonna only say this once. Stop building your forts with only one wall and one gate. Build many walls to cross, many gates to open, observe the user through each of these.
Obviously that's better but if you make the user jump through too many hoops they're just going to pick someone else to do business with.
This is why FIDO is a good idea: it's not only more secure but also easier. In the security world that's kind of like magic; usually you end up trading one for the other.
They really just can’t get out of their own way in screwing this stuff up…
Resident keys allow the browser to query the secure element for a list of usernames for a domain. It's a nice feature, but you have to set up the protocol to reliably fall back to non-resident keys on secure elements that are space-constrained.
But then they came up with these resident key "preferred" and "discouraged" keywords which are sent by the site?! And the various clients all interpret them differently, so there's no obvious way for a site to let clients whose secure elements have practically unlimited storage opt in to resident keys while limited-storage secure elements stick with the wrapped/derived key.
The current situation is if you send “preferred” you’ll fill up limited space YubiKeys, and if you send “discouraged” then Androids with practically unlimited storage won’t use resident keys.
Clients should decide based on their capabilities and should always opt for residency if they have unlimited storage. The protocol should have been defined with just a boolean ‘required’ field for residency, which would only be used in highly unusual circumstances (which I’m not sure what those are — what’s the justification for a site requiring residency?)
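For concreteness, here is roughly where that keyword lives: a sketch of the options object a server hands to `navigator.credentials.create()`, written as a Python dict mirroring the JSON. The field names follow the WebAuthn Level 2 spec; the function name and the example `rp`/`user`/`challenge` values are made up for illustration.

```python
def creation_options(resident: str) -> dict:
    """Build hypothetical WebAuthn credential-creation options with the
    given residentKey preference ('required'/'preferred'/'discouraged')."""
    assert resident in ("required", "preferred", "discouraged")
    return {
        "rp": {"id": "example.com", "name": "Example"},
        "user": {"id": "dXNlcg", "name": "alice", "displayName": "Alice"},
        "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # ES256
        "challenge": "cmFuZG9t",  # placeholder; must be random per ceremony
        "authenticatorSelection": {
            # "preferred" fills up limited-space YubiKeys; "discouraged"
            # keeps big-storage platform authenticators on derived keys.
            "residentKey": resident,
            # legacy boolean, only true when residency is mandatory
            "requireResidentKey": resident == "required",
        },
    }
```

Note there is no "resident if you have room" value: whichever of the three strings the site picks, it is guessing about the authenticator's storage, which is exactly the complaint above.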
What I don't get is why can't the browser just maintain a list of accounts that it knows work with the non-resident key? The username is not secret information, so why does it need to be held in the key?
The answer provided by the author (TLDR: you'd have to break AES) is far from satisfying to me.
With a resident key, the only thing an attacker on an endpoint could ever get is a challenge & a response to that challenge. It's more than nothing, but limits a lot of attacks.
With non-resident keys, an attacker can not only do all kinds of offline attacks* against the HMAC and crypto, but also has a far better attack position against the security key: you've got decryption, HMAC, and key-parsing code all running on untrusted data (if done correctly, the HMAC has to fall first).
Further, you've got twice the encryption happening on the key, which could provide a larger attack surface for side channel attacks.
I'm not saying any of these are trivial or even feasible, just that "citation needed" for resident key == non-resident key, in terms of security.
*if your idea of an offline attack of this nature is just "brute-force AES" and that's about it: there are a lot more options
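To make the "untrusted data hits the crypto first" point concrete, here is a toy sketch of the non-resident (wrapped/derived) scheme, using an HMAC tag in place of real authenticated encryption. This is an illustration of the general technique, not any vendor's actual design; the function names are made up.

```python
import hashlib
import hmac
import os

# Device master secret: never leaves the authenticator.
MASTER_KEY = os.urandom(32)

def make_credential(rp_id: str) -> bytes:
    """Mint a credential ID that encodes everything needed to re-derive
    the key later; the device itself stores nothing (non-resident)."""
    nonce = os.urandom(16)
    tag = hmac.new(MASTER_KEY, nonce + rp_id.encode(), hashlib.sha256).digest()
    return nonce + tag  # handed to the server, echoed back on assertion

def derive_key(rp_id: str, credential_id: bytes) -> bytes:
    """On assertion, verify the MAC over the (attacker-supplied) credential
    ID *before* doing anything else: the 'HMAC has to fall first' step."""
    nonce, tag = credential_id[:16], credential_id[16:]
    expected = hmac.new(MASTER_KEY, nonce + rp_id.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("credential ID not minted by this device")
    # Re-derive the per-credential private-key seed from the master secret.
    return hmac.new(MASTER_KEY, b"key" + credential_id, hashlib.sha256).digest()
```

The derivation is deterministic, so the same credential ID always yields the same key; the MAC check is the parsing-untrusted-input surface the comment is pointing at, and a resident key simply has none of this code path.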
> since a fingerprint or a retina scan makes a fine good two factor as well
Last year I was getting new IDs made for myself and my mother, which in the newest version require fingerprints to be provided alongside the biometric photo. The office clerk who served us had a real struggle taking scans of our fingerprints. Our skin was damaged by low temperatures and extensive use of alcohol-based sanitizing gel, because of the current situation back then. She gave us vaseline cream to make it go a little easier, but no luck: each of us spent around 10 minutes scanning each finger until it finally got digitized.
The scanner used could have been a really cheap one, and thus to blame for prolonging the whole scanning process, or it was our skin condition, or both. Whatever it was, after this experience, even an episodic one, I don't think that fingerprints are "a fine two factor".
As for eyes: it's really unlikely to happen to many people, but eyes can be damaged, and there are eye diseases that might affect verification. Not to mention post-procedure treatment that rules out this scan technique for some time.
You can go, "here is the passkey to get into all my accounts", which assumes they don't lose the key and it's not stolen. Or you can register their biometrics into your whitelist and not have to deal with that.
Someone gains access to a Passkey. You generate a new Passkey and replace the registered Passkey for the server. Someone gains access to your fingerprints or retina scan... Game over?
In 2014 the fingerprints of Angela Merkel and the German Defense Minister were "cloned" by taking high resolution pictures at a press conference. Against targeted attacks, using something that's readily visible on your person isn't the best idea.
biometrics aren't passwords. They are only somewhat useful as authentication if you can verify that they are attached to the person you are authenticating, which generally only works in person.