Garmin's hardware, including their smartwatches, is very tempting. It's designed to be easy to use even with gloves, has good battery life, and some high-end models have solar charging and/or a 40-meter scuba diving rating.
If there were a way to use Garmin's smartwatches without their cloud, I would probably consider it. But since their ransomware attack in 2020, I really can no longer trust their cloud, especially since the data collected from a smartwatch is on the more sensitive side. The only Garmin hardware I'm still using is their bicycle tail light + radar, which I use with a Wahoo bike computer instead of other Garmin products.
I had a Facebook account from the early days, and deleted it around 2018. I never had an Instagram account.
Recently I got an email from Instagram saying it's "easier to get back to Instagram", addressed to my usual username. I couldn't check what's on that Instagram account because Instagram doesn't show you anything without logging in, so I asked my wife to check it for me. It has no photos, profile photo, or accounts it follows, but it does have several followers who were my Facebook friends (when I had the account). So at some point Meta created that Instagram account for me and associated it with my Facebook account, I guess? I hope that account was not "AI-powered".
>exposing yourself to the mercy of a single organization
The nice thing about passkeys is that unlike passwords, you can have multiple per account.
So you can register a passkey from 1Password to website A, and also a passkey from Apple Keychain, and also a passkey from your Google account, and also a passkey from a YubiKey, so even if you are locked out of one of those accounts, you still have several other ways to log into your account at website A.
And _if_ your, say, Apple keychain is compromised, you can just revoke the passkeys from your Apple keychain from all the websites (yes it's tedious, but it's doable).
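The multiple-passkeys-per-account model described above can be sketched as a toy server-side data structure. This is purely illustrative (the `credential`/`account` types and method names are made up, not part of any WebAuthn API), but it shows why revoking one provider's passkeys leaves the others usable:

```go
package main

import "fmt"

// credential is a toy stand-in for a registered passkey's public key.
type credential struct {
	id       string
	provider string // e.g. "1Password", "iCloud Keychain", "YubiKey"
}

// account holds a list of registered credentials, not just one.
type account struct {
	creds []credential
}

func (a *account) register(c credential) { a.creds = append(a.creds, c) }

// revokeProvider removes every credential from a (compromised) provider,
// leaving credentials from other providers intact.
func (a *account) revokeProvider(provider string) {
	kept := a.creds[:0]
	for _, c := range a.creds {
		if c.provider != provider {
			kept = append(kept, c)
		}
	}
	a.creds = kept
}

func main() {
	var acct account
	acct.register(credential{"k1", "1Password"})
	acct.register(credential{"k2", "iCloud Keychain"})
	acct.register(credential{"k3", "YubiKey"})
	acct.revokeProvider("iCloud Keychain") // say the keychain is compromised
	fmt.Println(len(acct.creds))           // prints 2
}
```

Revoking by provider is exactly the tedious-but-doable cleanup described above, repeated once per website: the remaining registered credentials keep working.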
A door can be kicked in, a safe can be drilled, a password can be reset. But these keys (whether a phone or a Yubikey) to your digital life are irreplaceable if they're all lost. We've never been in this situation before.
The problem with any solution relying on a couple of physical devices as the sole access to your digital life is that the management and protection of those objects becomes one of the most important things in your life. These keys are supposed to give perfect security so by design making "software" copies brings that security to the level of passwords. But losing them kills your digital life.
You have two keys in the house and you have a fire or severe natural disaster? There's no reset for them and you just piled a tragedy on top of another. You want to restore them from a backup? You probably need the keys to begin with. People need one on them at all times, one at home, one or two in some other safe far away location but to still trust that they won't be misused there.
That's all people hear when they look into passkeys. "One more key" is not enough for most people, tech savvy or not.
Even if it's technically possible, I don't think it's very practical, as the UX is heavily directed towards a single passkey provider. I can imagine doing this for the one or two most important websites, but not for each of the dozens (hundreds?) of websites users have registrations on.
It's not actually all that bad. I went through today and added passkeys for all the sites I use that support them, and for most it went like this.
1. I login to the site using my password, supplied by my password manager (1Password).
2. I go to the site's security settings and find their passkey settings. I invoke their "add a passkey" function.
3. If I'm on my Mac, using Chrome, Firefox, or Safari, I get a dialog showing me the site and the user name and asking if I want to save a passkey in my 1Password.
There is a security key icon on the dialog that I can click if I want to save the passkey elsewhere. That replaces the 1Password dialog with one offering to save a passkey in my iCloud keychain for use on all my Apple devices.
That dialog has an "other options" link which brings up another dialog that adds options to use an external security key or to save a passkey on an iPhone, iPad, or Android device with a camera. The latter option will show a QR code that can be scanned on that other device.
I save the passkey in either 1Password or my iCloud keychain.
If I'm on my iPad using Safari it is similar, except the first dialog shows both 1Password and iCloud as storage destinations, with radio buttons to pick between them.
4. Repeat step #3 once, storing a passkey in whichever of 1Password and iCloud Keychain I didn't pick the first time through.
Some sites let you give the passkeys names to make them easier to remember so there might be typing a name in there somewhere.
All in all, it is only a few seconds to add a passkey after pressing the "add a passkey" button on a site, so adding two is no big deal.
I currently have over 700 credentials in 1Password. Consider me not interested for anything that takes any decent amount of time.
I really like the idea of passkeys, but I think most people forget that security and convenience do not work well together, and passkeys attempt to solve exactly that tension.
Passwords have their own issues, but they are easy to copy to multiple stores, which makes losing access to them much harder.
And as long as there's a single point of failure (be it Apple, Google, 1Password, or whoever stores your passkeys) without any _easy_ way to retrieve your passkeys again, I'm advising against them.
With passwords, I don't care about losing access to my iCloud/1Password/whatever. A somewhat recent list of all my passwords is stored in a safe place, printed out on paper. AFAIK this isn't easily doable with passkeys.
You can still benefit from adding passkeys to some sites. It will often be a little faster and/or fewer clicks than using a password, especially at sites where you have to enter a TOTP code.
For example, at GitHub, signing in with a passkey from the sign-in page is two clicks: I click "Sign in with a passkey", a dialog pops up from the browser showing the passkey it will use by default, and I click "Sign in".
With a password it is a click to have 1Password fill my email and password, a click to submit (which could be eliminated if I had autosubmit enabled in 1Password), and then it asks for a TOTP code. After the code is entered it is a click to complete the sign in.
GitHub's TOTP entry form is well coded, so if 1Password has the TOTP key it will automatically fill it. If you don't keep TOTP keys in 1Password, you'll have to open your authenticator app and copy the code.
Considering that it only takes a few seconds to add a passkey for GitHub to 1Password, you'll make up that time after just a few logins.
I'm not sure what UX you are talking about; the majority of websites supporting U2F/passkeys have UX to manage your U2F keys/passkeys. (The only exception I can think of is early Twitter when it first implemented U2F, and at that point it only allowed you to add a single U2F key, but even Twitter fixed that later and supports multiple keys now.)
And (this is probably not emphasized enough) you really should never use only a single U2F key/passkey for a website; that's a recipe for getting locked out when you can't find your U2F key or get locked out of your passkey provider. I have at least 2 YubiKeys on my keychain at all times (one USB-A and one USB-C), plus one for each of my computers, and passkeys from 1Password, Google, etc. And whenever I add U2F keys/passkeys to a website, I add all or most of them.
...and you just described why this is not ready for prime time. Managing a number of physical devices tied to completely opaque secrets stored by unclear providers in places you never see, with hidden agendas promoting their locked-in solution over all others and complicating everything out of one ecosystem.
Most standard users will either mess up royally or run away scared. Damn, I've been in this field for 30 years, I've used 4 OSs, 5 different browsers, and devices from every ecosystem, and I still find this whole thing too much of a hassle.
And yes, I do have a backup passkey. Even though I had to convince my skip-level that it made sense. I just find it all too complex to adopt it broadly.
If I am reading this right, any time you set up passkeys on a website, you add half a dozen passkeys from various services? Yeah, this sounds totally impractical to me.
Have you considered stopping using passkeys and storing strong passwords in a password manager instead? You will get approximately the same level of security:
- Either way, if one site is compromised, other sites are not affected (because password managers use a unique password per site)
- Either way, you will be phishing-protected (because password managers autofill based on the host name, and you are smart enough not to override that)
- Either way, it'll be game over if you get malware on your computer (because it will steal your passkeys out of 1Password)
... but your UX for new website would be dramatically simpler.
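The host-name matching mentioned above is the crux of the phishing protection password managers give you. A minimal sketch of the rule (`shouldAutofill` is a made-up name, and real managers use more nuanced matching, e.g. public-suffix awareness, so treat this as an approximation):

```go
package main

import (
	"fmt"
	"strings"
)

// shouldAutofill mimics the password-manager behaviour described above:
// only offer a saved credential when the current host is exactly the
// saved host or a subdomain of it. A phishing page hosted at
// "example.com.evil.net" fails both checks.
func shouldAutofill(savedHost, currentHost string) bool {
	savedHost = strings.ToLower(savedHost)
	currentHost = strings.ToLower(currentHost)
	return currentHost == savedHost ||
		strings.HasSuffix(currentHost, "."+savedHost)
}

func main() {
	fmt.Println(shouldAutofill("example.com", "login.example.com"))    // prints true
	fmt.Println(shouldAutofill("example.com", "example.com.evil.net")) // prints false
}
```

Note that passkeys get the same property for free, since the browser binds each credential to its origin; the point here is only that a manager-driven password flow is not far behind on this particular axis.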
It's not much of a hassle. I'll add at least two when I want to start using passkeys for a site: maybe a passkey on my phone and one on my keychain device. Then, the next time I use the service on my laptop, I'll sign in with either my phone or keychain (whichever happens to be closer) and make one there. Then the next time I want to use the service on my desktop, I'll sign in with whatever I've got nearby and add my desktop and the token in my desk drawer. And maybe my password manager also gets a passkey, added somewhere in there.
It's not like every time I sign up for a new site I have to drop everything right at that instant and go add a passkey to every single device I own.
> And _if_ your, say, Apple keychain is compromised, you can just revoke the passkeys from your Apple keychain from all the websites (yes it's tedious, but it's doable).
Without a standard, automatable way of doing this, it doesn't happen in practice, even assuming sites implement it competently enough to allow multiple credentials (TOTP secrets, for example, are often limited to one per account, which is similarly annoying for maintaining a revocable backup).
> The nice thing about passkeys is that unlike passwords, you can have multiple per account.
I would charitably estimate that of the sites currently supporting Passkey, the ones that support multiple passkeys are in the single digit percentage. So, practically, you can't.
As someone who actually uses them in a lot of places, the number of sites I know of that allow only a single passkey is one: PayPal. What other sites do you know of that allow only a single one?
PayPal allows multiple passkeys. I just added passkeys there today and had no trouble making two.
I have a vague recollection of running into some trouble when I tried to add passkeys to an account in their sandbox (sandbox.paypal.com) but don't remember what it was. I realized I don't need any of my sandbox accounts any more and deleted them all from my password manager rather than try to solve the problems. :-)
PayPal's handling of multiple passkeys is kind of annoying. I didn't see any way to change the default labels it gives them, which appear to be simply the OS and browser names. So both of my passkeys are labeled "macOS Chrome".
Clicking on them tells me when they were created, but the resolution is only to the day.
If somehow the private key for one of them leaked and I wanted to delete the corresponding public key, I wouldn't know which one to delete.
Some sites label the passkeys "1Password" and something like "Apple iCloud" which works a lot better.
I too first experimented with passkeys about a year ago, and then largely ignored them until this discussion prompted me to have another go.
A year ago I found them almost unusable. I had trouble getting browsers to recognize a passkey was available for a site, and I had trouble getting browsers on my Mac to use passkeys from the Apple keychain on my Mac. They would often put up a QR code for me to scan on my phone to use the passkey from the phone--and that was also not very reliable.
Now I'm hitting almost no problems.
There are still some settings annoyances with some sites. At PayPal, for instance, it asks for my TOTP code even when logging in with a passkey. There is no option to turn off TOTP for passkeys but leave it on for passwords.
A few years ago we added `GOAMD64=v3` [1] to how we build our Go binaries into Docker images, since all our production servers support it and it can give us a free performance boost.
Then it turned out Rosetta does not support it, so those Docker images could no longer run on developers' Mac laptops, and we had to revert. This is why we can't have nice things. (We don't use any ARM servers, so we never bothered to build multi-platform Docker images.)
From the reference: "GOAMD64=v3: all v2 instructions, plus AVX, AVX2, BMI1, BMI2, F16C, FMA, LZCNT, MOVBE, OSXSAVE."
Were those supposed to have been included? Which of those is not emulated by Rosetta?
I'm struggling to follow the chain of how this results in us not having nice things.
If I understand correctly, paraphrasing: Go made an assumption about user requirements that turned out not to be true when the ARM Macs came out? Wouldn't the ARM Mac users of Go prefer Docker images that don't need to be emulated anyhow?
amd64 v3 comprises instructions included in CPUs starting from 2013, so basically any modern amd64 CPU supports all of them. As mentioned on the Wikipedia page, QEMU 7.2 also implemented all of them. They are more efficient instructions, hence the "free performance boost".
But Rosetta doesn't. Which instruction(s) it doesn't implement doesn't really matter (I can't remember either). What matters is that when it hits an instruction it doesn't implement, it throws an illegal instruction error and the code hard-crashes.
So because of Rosetta, we can't build with amd64 v3 enabled, and we don't get the free performance boost (nice things).
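To make the v3 feature set concrete: before enabling `GOAMD64=v3`, you can compare a host's CPU flags against the v3 list. A small Go sketch (`missingV3Flags` is a hypothetical helper; flag spellings follow Linux's `/proc/cpuinfo`, where LZCNT is reported as `abm` on some kernels, and other flags may be spelled differently depending on the kernel):

```go
package main

import (
	"fmt"
	"strings"
)

// The x86-64-v3 level adds these features on top of v2 (per the GOAMD64 docs).
var v3Flags = []string{"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "lzcnt", "movbe", "osxsave"}

// missingV3Flags takes a space-separated CPU flag list (such as the
// "flags" line of /proc/cpuinfo) and reports which v3 features are absent.
func missingV3Flags(cpuinfoFlags string) []string {
	have := map[string]bool{}
	for _, f := range strings.Fields(cpuinfoFlags) {
		have[strings.ToLower(f)] = true
	}
	var missing []string
	for _, f := range v3Flags {
		if f == "lzcnt" && have["abm"] {
			continue // LZCNT shows up as "abm" on some kernels
		}
		if !have[f] {
			missing = append(missing, f)
		}
	}
	return missing
}

func main() {
	// A v3-capable CPU: nothing missing.
	fmt.Println(missingV3Flags("sse sse2 avx avx2 bmi1 bmi2 f16c fma abm movbe osxsave")) // prints []
	// An older (or incompletely emulated) CPU: several v3 features missing,
	// and any one of them can trigger a SIGILL hard crash in a v3 binary.
	fmt.Println(missingV3Flags("sse sse2 avx"))
}
```

The same idea applies to an emulator: if even one flag on the list is unimplemented, a `GOAMD64=v3` binary can die with an illegal instruction error the moment the compiler happens to emit that instruction.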
Compared to European cities, San Jose is definitely bad on bike infrastructure. But if you compare it to neighboring cities like Santa Clara (a low bar, I know), it's pretty OK. Sam Liccardo at least actually cares about cycling around the city, and made bike infrastructure improvements, like 3rd Street around downtown (yes, it took some time after the change to educate our bright drivers on how to make right turns there correctly, but it mostly works now).
And San Jose's Department of Transportation traffic signal unit at least has an email address I can use to tell them which traffic signal failed to detect a bicycle and turn green, and they actually fix them.
I switched from a reMarkable 2 to (my wife's old) Kindle Oasis to a Kobo Sage, and I love the Kobo Sage, despite some minor annoyances (for example, it's the only device I've used that doesn't auto-downscale the images in EPUBs: https://b.yuxuan.org/url2epub-downscale-images).
One thing in particular is that the physical page-turning buttons are very useful. None of Amazon's new Kindles have physical page-turning buttons any more, from what I see in the reports.
I don't think that's even new for the big N US airlines. I noticed about 10 years ago that some United flight numbers were used by not-really-related flights, or for both the outbound and inbound legs of the same city pair.
but it's kind of their fault? they designed the api that way, they decided what can be done in userland and what must be done via kernel. they at least _allowed_ it to happen every time.
> they designed the api that way, they decided what can be done in userland and what must be done via kernel
They didn’t have much of a choice - it is very hard to get adequate performance with real-time filesystem filtering without doing it in kernel mode. Not aware of any other mainstream OS which succeeds at that.
And they kind of had to provide this feature, since they’ve supported it since forever (antivirus vendors were already doing it back in the days of MS-DOS and Windows 3.x/9x/Me), and there is a lot of market demand for it. It is easy for Linux to say “no” when it never has had support for it (in official kernels)
But, as the blog post points out, it sounds like CrowdStrike is doing a lot of stuff in kernel mode that could be done in user mode instead - whether due to laziness or lack of investment or lack of sophistication of their product architects
> they at least _allowed_ it to happen every time
Microsoft, in allowing third party code to be loaded into their kernel, is no different from other major OS kernels, such as Linux or Apple XNU.
Apple is (increasingly) the most restrictive about this, and a lot of people criticise them for it.
Even Linux imposes some restrictions (which kernel symbols to export, at all or as GPL-only), although of course, being open source, you can circumvent all restrictions by changing the code and recompiling.
eBPF being able to crash the kernel is usually a sign of a kernel bug. And it sounds like in this case it was even a bug specific to Red Hat kernels, introduced by a Red Hat patch.
That said, even if they are triggering a Red Hat kernel bug, CrowdStrike should be testing their software adequately enough to pick up that issue before customers do – and it sounds like they haven't been
That was more of a kernel bug than a crowdstrike bug. However, it's clear that they are pushing what you can do in kernel space to the limits, which is not a great sign.
The famous Tanenbaum-Torvalds debate happened all the way back in 1992. At the time, the most common microkernel was Mach, which had significant performance problems. NeXT/Apple solved them by transforming Mach into a monolithic kernel, making Mach (as XNU) one of the most popular kernels in the world today (powering iPhones, iPads, Macs, etc.). But that doesn't help Tanenbaum's side of the argument. And I don't believe his own Minix did much better than Mach did.
Whereas, from what I hear, L4 and its derivatives have solved this problem in a way that Mach/Minix/etc. could not. Yet still, it makes me wonder: if L4 has really solved it, why aren't we all running L4? L4 has had some success in embedded applications (such as mobile basebands and the Apple Secure Enclave), but as a general-purpose operating system it has never really taken off.
An application in which something like slow file IO wouldn’t be a problem - does it even have a filesystem? And we don’t know whether Intel has done things to make it an “impure” microkernel, like what NeXT/Apple did to XNU, or Microsoft did with win32k.sys
When a parking valet takes a car on a joy ride and crashes into a tree, we could blame the tree. We could blame the car owner for handing over the key. We could blame the auto manufacturer that didn't provide a "valet mode". We could blame the police for not detecting the joy ride before the crash.
All of these parties could do better (stupid tree!). But the real problem is the valet.
We can say that it is obvious that the electronics-heavy cars of today should anticipate rogue valets and build in protections. But we shouldn't let rogue valets off the hook for damages.
As a consumer, you could choose to only purchase cars that have "valet mode". So should we blame consumers who don't? If so, we should blame the airlines, hospitals, etc.--not Microsoft.
How about we prosecute valets unless they refuse to park cars that don't have "valet mode"?
> All of these parties could do better (stupid tree!). But the real problem is the valet.
No, the operating system is supposed to provide secure access to hardware and isolate independent subsystems so they can't interfere with each other. That's its whole purpose for existing. The fact that people feel they need to deploy CS is a Microsoft failure. Windows is just not a secure OS.
You’re shifting practically the entirety of the blame to a company that at best was an accomplice to the issue.
I get that you hate Microsoft, but not everything is their fault and it’s disingenuous to pretend otherwise.
> The fact that people feel they need to deploy CS is a Microsoft failure.
CS is also available and widely deployed on Mac and Linux. Is that a failure of Apple and all the distros? It literally took down Debian and Red Hat systems earlier this year, is that also not CS’s fault?
> CS is also available and widely deployed on Mac and Linux. Is that a failure of Apple and all the distros
Yes. All widely deployed commodity operating systems have terrible security designs. None of them have access control systems that enable the principle of least privilege, let alone encourage or prioritize it, and none of them are written in robust languages that make verification of safety or security properties possible. Microsoft has made some headway on partial verification, but it's a far cry from what's needed.
> Yes. All widely deployed commodity operating systems have terrible security designs. None of them have access control systems that enable the principle of least privilege, let alone encourage or prioritize it, and none of them are written in robust languages that make verification of safety or security properties possible. Microsoft has made some headway on partial verification, but it's a far cry from what's needed.
What, exactly, is your solution then? To never use a computer again? Because that's certainly what it sounds like.
Secure, robust operating system designs have been known since the 1970s. KeyKOS, EROS, CapROS. All commodity systems instead use classic access control lists, subject to fundamentally unsolvable access control vulnerabilities. seL4 finally implemented those lessons but it's far from a commodity operating system.
Can you point to an OS that can actually be used as a general-purpose OS? Or are you going to tell us that trying to run a web browser is actually what is fundamentally wrong with technology these days?
You could also choose to park the car yourself or plan for a secondary mode of transportation if something happened to your car.
Not the best analogy. The organization that deploys said software is responsible for the uptime of its systems. They didn't have to use CrowdStrike, and if they do, they should have a plan in the event of failure.
Just to be clear within the analogy: are you expecting the auto manufacturers to "force-eject" any hotel on Park Ave that has a record of valet mishaps? Or did you mean individual cars should force-eject the valet?
If a Caesars Entertainment property in Macao has enough incidents, should GM update the firmware on their automobiles to force-eject valets at Caesars Entertainment properties in Las Vegas?
Now imagine that GM actually operates valet services in Macao and Las Vegas. Should they be allowed to force-eject valets from competing services?
I am not a Microsoft apologist. I think they should do better. I think Linux and FreeBSD should do better. I personally avoid Microsoft products. But I place more blame on people who use MS products than I do on MS. After all, I never intend to hand my beat up old Corolla over to a valet so why should I have to pay for a "valet mode" feature that Toyota is forced to build into all their cars? Isn't it reasonable that motorcycles, 18-passenger vans, and scooters don't need "valet mode"?
In my book, the auto manufacturer is lower on the list of culprits than the valet, "the establishment that keeps a valet with an abominable record on staff", and the vehicle owner. But some place like Car and Driver could definitely prioritize encouraging GM or Toyota to develop valet modes over berating owners; so I don't mind a place like HN shooting a few arrows at MS. Unless the general public follows their lead and lets bad guys off the hook by shifting too much focus to somebody lower on the list.
> Just to be clear within the analogy: are you expecting the auto manufacturers to "force-eject" any hotel on Park Ave that has a record of valet mishaps? Or did you mean individual cars should force-eject the valet?
Not OP, but I think the analogy here is the hotel "force-ejecting" (firing) a valet with a history of doing joy rides. That seems very reasonable.
In the analogy, it seems Microsoft is a car manufacturer. The hotel is the company that bought software from CrowdStrike. The problem is that Microsoft should not control who has access to which APIs, that is a huge can of worms, and actually called anticompetitive by the EU from what I understand. At MS level, either they publish APIs or not. If published, anyone should be able to write software for them. This is especially bad if MS themselves also sell security software that uses the same APIs. It would literally mean MS deciding who is allowed to compete with their security software.
I think it works better (please allow me to change it) if Microsoft is the hotel. Crowdstrike is the restaurant inside the hotel. The restaurant is serving poisoned food to the guests, who assume it is a decent restaurant because it is in their hotel.
Also the restaurant has their own entrance without security and questionable people are entering regularly, and they are sneaking into the hotel rooms and stealing some items, breaking the elevator.
At the same time, the hotel is in a litigation process with the restaurants association, because in the past they did not allow any restaurant on their premises. The guests, naturally, do not care about this, since their valuables have been stolen, and they have food poisoning. The reputation of the hotel is tarnished.
I don't think this works since Microsoft isn't the hotel. The hotel in your example chooses which restaurants are inside, but Microsoft doesn't. In this example, Microsoft is the builder who built the hotel building for a 3rd party. That 3rd party decides which restaurants it wants to partner with, as well as any other rules about what goes on in the building.
If the builder came around and made changes to ban the 3rd party's restaurant partner, that would cause a ton of issues and maybe get the builder sued.
Microsoft can't decide what can and can't run on their platform - the most they can do is offer certification which can't catch everything, as we just saw with Crowdstrike since they decided to take a shortcut with how they ship updates. Microsoft also had to allow for equal API access so they don't get sued by the EU.
Operating system (hotel) decides which programs run in kernel mode (Crowdstrike) but ok. Let me address the other point.
Again, the reasoning of allowing equal API access to avoid getting sued is a false dichotomy: Microsoft could choose to make an OS that would not need such mechanisms simply to be usable.
They could also remove their own crowdstrike-alike offering, so that it would not be considered anti-competitive. They could also choose not to operate in EU. Of course, that would lower their profits, which is the real motive here.
Once you sum it up, the reasoning goes: hospitals/flights can stop working because a company cannot lower its profits, and said company is not to blame at all. That is clearly false; the rest is sophistry and back-bending arguments, IMO.
I am conceding that point (the "but ok" part). Maybe I could have expressed it better.
Please note that in my analogy the hotel has input into which restaurants are allowed (the opposite of your scenario). There are also not infinite CrowdStrike-like offerings, only a few. The same thing applies to the hotel, yes, only limited by the surface of the building and cultural norms.
In any case, the analogy cannot please everyone, and I can see how there are some errors with it in some aspects. In others, I consider it accurate. Using an analogy is an invitation to nitpick it, so it is my fault really, but I could not resist.
There are other points in the analogy that I feel reflect very well how ridiculous it is to claim Microsoft has no responsibility whatsoever. IMO they do have at least partial responsibility. One cannot simply excuse them "because EU".
But this implies that even the guests who never went to that restaurant and have no links whatsoever to it might somehow still be directly suffering because of its presence.
In reality this doesn’t seem to be the case at all.
I'm expecting restaurant owners to fire bad valets.
Or, in Microsoft's case, to prevent CrowdStrike from causing harm to their customers via regulatory, social, or software means.
I'm aware it's a sticky regulatory situation, but CS has a history of these failings and the potential damage could be severe. Despite this, no effort (that I am aware of) was made by Microsoft to inform customers that Crowdstrike introduced potential risks, nor to inform regulators, nor to remove the APIs CS depends on.
I don't believe Microsoft is solely responsible, but I do believe that throwing all of the blame for the very real harm that was caused onto CS alone is missing a piece of the puzzle.
Last aside, every large corp has team(s) focused on risk. There's approximately zero chance they didn't discuss CS at some point. The only way this would not have happened is negligence.
Microsoft was required to let them have the same access their own software used. Which seems fair to me. Microsoft can remove those APIs entirely, they just can't restrict them.
Can Microsoft legally ban a competitor for perceived incompetence? I doubt it, particularly seeing how much competence is shown in Windows and MS Teams software.
Microsoft assigns driver levels to these guys etc., and allows them to load kernel-mode components as protected. If Microsoft did not allow that, CS could not cause such damage. Of course, as you pointed out, this would then turn into some lawsuit blaming MS for killing competitors, even if they did it to try to protect their customers.
The problem is that the establishment here is, well, the establishment. That is, the state itself. Or at least one of them. Somehow MS is in a position where for any slight antitrust thing they will be prosecuted. Our system is set up to let these actors in...
You can't just let people do everything from userland; the performance would tank. As for restricting kernel-land, EU competition regulators would not be happy if MS were the only one able to write antivirus software that runs in kernel-land.
Well, Microsoft did not publicly commit to using the same APIs, with no privileged access, for its own antivirus products. That's why the EU said no way; not because kernel access was revoked.
Yes, but then of course Microsoft is being obligated to open part of kernelspace to competitors, which is arguably "OK" from a competitive regulation perspective, but that then places a special burden on competitors to maintain code hygiene given the potential for crashes. It makes CrowdStrike's negligence all the more unacceptable.
I believe what philistine is suggesting is that Microsoft could have implemented their own security offering using a safer alternative like eBPF, and then opened that interface to competitors as well.
I think that would have been a proactive approach. That said, I'm not entirely convinced that the EU was right to place the restriction in the first place.
The article you shared says that Kaspersky filed a complaint, but I didn't see a clear statement there about what the outcome was. I do now see other reputable sources reporting that an agreement was reached in 2009 where Microsoft promised to allow vendors the same access to the kernel its security software had [0].
I think a proactive approach might have been for Microsoft to provide safer interfaces with the kernel, and then use those in its own security offerings.
That said, it does sound like EU competition regulation was a contributing factor here, and I think the EU is wrong on this one and that an OS vendor should not be required to provide unrestricted kernel access to allow security software vendors to compete.
Mostly unrelated, but it's somewhat interesting that it was Kaspersky insisting on kernel access... The US government seems convinced they are compromised.
Please don't post in the flamewar style to HN, such as you did here and downthread (https://news.ycombinator.com/item?id=41096774). It's not what this site is for, and destroys what it is for.
There are ways around this that I've discussed elsewhere so I won't repeat them here.
However, think of it this way: Windows restarts, tries to load with the new patch, and crashes.
Question: why can't Windows be designed so that, on a crash, it automatically restarts and loads the previous state, sans patch?
Answer: Windows could be designed that way, but it would require Microsoft to do many things it doesn't want to do. Some of those would require going back to the beginning and reengineering quarter-century-old (or older) code from scratch; that means redesigning APIs and the underlying architecture from first principles.
Why doesn't Microsoft want to do this? It's obvious so I won't bother to spell it out.
Nevertheless, when the dust fully settles and someone outlines these alternative design strategies in detail, it'll be obvious to everyone what a fragile house of cards Windows has been built on.
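To make the "restart and load the previous state sans patch" idea concrete, here's a minimal sketch. Everything in it is hypothetical (the names, the two-strikes policy, the counter): it only illustrates the shape of such a design, not any real Windows mechanism.

```python
# Illustrative sketch of "crash, reboot, fall back to the pre-patch state".
# All names and policies here are invented, not taken from Windows.

MAX_FAILED_BOOTS = 2  # hypothetical: give the new patch two tries

def choose_config(failed_boots):
    """Pick which state to boot: the new patch, or the last known good."""
    return "last-known-good" if failed_boots >= MAX_FAILED_BOOTS else "new-patch"

def try_boot(config):
    """Pretend the new patch always crashes the kernel."""
    return config == "last-known-good"

def boot_until_ok():
    failed = 0        # in a real design this counter must survive reboots
    history = []
    while True:
        config = choose_config(failed)
        ok = try_boot(config)
        history.append((config, ok))
        if ok:
            return history
        failed += 1   # recorded before the simulated reboot

print(boot_until_ok())
# -> [('new-patch', False), ('new-patch', False), ('last-known-good', True)]
```

The hard part a sketch like this hides is persisting the failure counter somewhere the crashing kernel can't corrupt, which is part of why retrofitting this onto old code is nontrivial.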
Maybe you should actually check the facts instead of just making a witty remark? The EC has regulated Microsoft into product decisions that make third parties as unrestricted as Microsoft itself. See here:
> (11) Microsoft shall make available to interested undertakings Interoperability Information that enables non-Microsoft server Software Products to interoperate with the Windows Client PC Operating System on an equal footing with Microsoft Server Software Products. Microsoft shall provide a warranty with respect to this Interoperability Information (including any updates), as specified in the general provisions in Section B.I of this Undertaking, effective 1 January 2010 for Windows Vista and Windows 7, and effective 15 March 2010 for Windows XP.
As someone who worked at MS some years ago, on a team that dealt directly with this issue (among other things): MS did figure out better solutions and did discuss them with the industry.
Kaspersky was running an SSL/TLS proxy in the kernel, IIRC, and didn't want to move it elsewhere because that would have meant reworking their product quite a bit.
The solutions MS (we) proposed were agnostic and overall better; the anti-malware industry simply doesn't want to make the changes, as these things do impose technical work on existing products.
No worries. That wasn't at all evident from the above complaint.
Was the drive for this industry forum coming from dealing with the EU, or was it more from MS trying to make things better without needing the prodding?
There is literally a ton of existing software out there keeping MS from doing exactly that. When it comes to not breaking legacy applications, MS scores far higher than any other operating system out there.
And that has absolutely nothing to do with them coming up with better approaches and then discussing them with the industry for potential rollout, adoption, etc.
But instead, now that they're in trouble, they're trying to blame the EU for restraining their monopoly.
Do you honestly believe MS being unhindered by competition restraints would lead to better results?
Are you forgetting MS has already demonstrated how that goes, and been literally convicted for it?
Let me try to make it extremely simple so that maybe you might understand something.
Say I am running a shop, and the EU tells me that under no circumstance may I refuse to sell a product in my shop, even if that product is a ticking time bomb that could blow up the shop. Hearing this, I create a document, "Good approaches to selling time bombs", with helpful advice like: make sure the timer in your bomb is switched off while it is in the shop. I also create an industry-wide forum with all the time-bomb manufacturers and discuss best practices for selling time bombs in the shop as safely as possible.
In spite of all this, there exists an idiot time-bomb manufacturer who ignores all the best practices, disregards the industry forum, and builds a shitty time bomb that blows up the shop.
Now please educate me: apart from doing the only surefire thing and banning shitty time-bomb manufacturers from the shop, what should MS do?
Mmm… meaningless analogies are kind of meaningless?
More like:
If you install a security product that then prevents your car from starting, is the carmaker entirely blameless for letting you install it?
If you pop the hood, tear off the "voids warranty" seal, ignore the "don't open this" labels, crack the seals, and shove something into the engine… sure.
…but if you just slap a widget with the “vendor approved” sticker on your dash and it bricks your car; that’s a bit sucky right?
I do feel Microsoft is not entirely blameless in this.
It should be easier to recover from this kind of thing.
They should have been paying attention and made a fuss that one of the biggest security vendors has been doing this literally since they started.
I would bet money that until two weeks ago Microsoft was high-5ing them for best security practices.
It’s not “their fault” but they can’t just go “wasn’t us!”.
Before Microsoft comes into the picture, the issues are CrowdStrike pushing updates without proper testing, selling a product whose update schedule customers cannot control, and customers being naive enough not to check what the products they install on critical systems actually do.
The big difference is that CS is not the user. In your analogy, it's as if your car allows you to drive off a cliff, and then an (almost) essential part of the car (the pedal, say) drives it off the cliff by itself.
It got there because a user or administrator approved and installed it. It didn't just appear there, Microsoft didn't install it there. The user ran it.
Right, so a slightly better analogy: you want to install a remote starter, but you find out they can only be installed in Fords, because the other manufacturers (Apple and Linux, in this case) believe that tampering with the critical path (the engine, i.e. the kernel) is unsafe. It isn't Ford's fault for allowing you to run some random engine modification; the mod is at fault.
Microsoft tried to lock down kernel access in the Windows Vista era. Antivirus vendors went crying to the EU and they forced Microsoft to allow access to the kernel to third parties.
I think, if I understand the systems right, Windows can roll back a bad driver update, but the CS update wasn't an update to the driver itself; it updated a configuration file, which CS pushed outside of Windows Update. So from the Windows Update perspective, the system started failing to boot with no changes to the system. Again, though, I don't know if I totally understand what CS did or what capabilities Windows Update has.
No you can’t roll back bad driver updates in any OS, if you could then by definition they do not sit in the kernel space. You just want the security code to not run in kernel space, which is a decision MS could maybe make and become like Apple, though most security software would in that case rebel.
The OS loads file A into the kernel. It crashes. It reboots. It decides not to load file A this time.
Wow, it's a rollback of kernel-space code.
Unless your argument is that you can't guarantee a rollback of every possible kernel driver, because it might have installed a rootkit while it had full control? Okay, cool, but this isn't a malware removal idea. It's an idea for normal drivers.
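The load-crash-skip loop can be sketched as a quarantine marker: flag a file as "pending" before loading it, clear the flag after it loads cleanly, and skip any file whose flag survived a crash. This is a hypothetical illustration (the file names and the `pending` set are invented; a real system would keep the marker in persistent storage), not how any particular OS implements it.

```python
# Sketch of "don't load file A again after it crashed us".
# The `pending` set stands in for a persistent on-disk marker.

def boot(files, pending, crashes_on):
    """Try to load each kernel file; return the set loaded, or None on crash.

    `pending` persists across calls, like a marker file on disk."""
    loaded = set()
    for f in files:
        if f in pending:        # marked pending => it crashed a previous boot
            continue            # quarantine it this time
        pending.add(f)          # mark *before* loading
        if f == crashes_on:
            return None         # kernel crash: the marker is left behind
        loaded.add(f)
        pending.discard(f)      # loaded fine: clear the marker
    return loaded

pending = set()
# Hypothetical file names; "filter_a.sys" plays the role of the bad update.
first = boot(["netdrv.sys", "filter_a.sys"], pending, crashes_on="filter_a.sys")
second = boot(["netdrv.sys", "filter_a.sys"], pending, crashes_on="filter_a.sys")
print(first, second)  # first boot crashes; second boot skips the quarantined file
```

The mark-before-load ordering is the whole trick: the crash itself is what leaves the evidence behind for the next boot to act on.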
It depends on how bad. In Linux you can rmmod to get rid of the bad module (if you haven't wedged the machine), fix your code, compile, and try again. I can't imagine that's actually different on Windows if you know what you're doing. How do you think driver development happens?