> The outage highlighted a different kind of digital divide. On one side, Gmail, Facebook, and Twitter kept running, letting us post photos of blue screens located on the other side: the Windows machines responsible for actually doing things in the world, like making appointments, opening accounts, and dispatching police.
At this point, using Windows for these tasks feels like clinging to legacy software, either because training people to use an iPad or a web browser seems too complicated, or because nobody wants to bear the cost of migrating their age-old systems to something modern and web-based. Native apps work great, but I think the world is moving to the cloud, and that means web-based everything should be the norm. Yes, AWS and Azure outages can still happen, but those can be mitigated by spinning up a VM in a different cloud.
This is also why software jobs aren't going anywhere for a while. Many systems still need to be migrated to more modern and robust clouds, and that transformation could take decades across the globe.
Your “modern and robust cloud” is my “why on Earth doesn’t this thing work offline”.
The world is absolutely full of things that have worked for decades to centuries without the Internet, are eventually more or less consistent (remember carbon-paper credit card machines?), and did an amazing job of keeping the world running despite wars, network partitions (the "network" would basically always be partitioned), mistakes, entire branches offline, etc.
Sure, a lot of things are easier when centralized, and "the cloud" is incredibly powerful. But it's not necessarily more robust. Also, depending on any sort of cloud means you're also depending on the network, and networks are far from infallible. There's a reason that a lot of stored-value transit systems still track balances on the card and will let people in even if a fare gate cannot connect to a cloud service.
And CrowdStrike took out plenty of cloud instances, and recovering them can be worse than recovering physical hardware, as the “robust cloud” has an absolutely terrible ability to do anything outside the happy path of booting an instance normally.
Okay, this all sounds very reasonable, but how do you know when your washing machine is finished when it's not connected to the cloud and you won't get notified in your app? That's not an easy problem, and the cloud helps a lot here.
Well, for people in an apartment it doesn't matter all that much, but if your laundry washer or dryer is in the basement, you don't necessarily hear it if you're out in the garden.
Sure, it might be a "nice to have" thing. But the machines usually show how long they'll take, and even if it's a newer one with sensors that make the whole process vary in time, I'd still think "Oh, okay, it'll take about 3 hours, so I'll be back at 6pm." It doesn't really matter if the clothes chill out for about an hour; the newer machines especially don't start to stink that fast. And on top of that, I don't think it has to go over the Internet even if you needed some sort of notification. Local would be sufficient.
If I buy something new like this and have a few choices, I intentionally pick the one with as few smart features as possible.
I think you are joking, but I'll reply with a serious answer.
Where I went to college, our dorms had (free) shared washing machines. This was "pre-cloud", but WiFi was available throughout. One student rigged up a Hall-effect sensor and attached it to each power cable, so it could detect whether the washers and dryers were on. It sent this info to a website that students could check to see if any washers or dryers were available.
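The server side of a monitor like that can be tiny. Here's a minimal sketch of the status logic; the current threshold and the `machine_status` helper are invented for illustration, and the real sensor read is replaced by a plain dictionary of amp readings:

```python
import json

# Toy sketch of the dorm-laundry monitor described above: given one
# current reading per machine (from a Hall-effect sensor on each power
# cable), decide which machines are running. The 0.5 A threshold is a
# made-up number for illustration.
THRESHOLD_AMPS = 0.5

def machine_status(readings):
    """Map {machine_name: amps} to {machine_name: 'busy' | 'free'}."""
    return {name: ("busy" if amps > THRESHOLD_AMPS else "free")
            for name, amps in readings.items()}

# Example: two washers drawing current, one idle dryer.
readings = {"washer-1": 4.2, "washer-2": 3.9, "dryer-1": 0.1}
print(json.dumps(machine_status(readings)))
```

From there it's just a matter of serving that JSON on a page the students can refresh.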
I wish washing machines had a fixed cycle duration. When I start the cycle my washing machine tells me the same duration, always, but in actuality it takes different amounts of time every time. Madness. I've been told this is a feature.
It actually is. Fixed-length cycles haven't been a thing for many years now. Modern washing machines adjust the cycle length based on the weight of the laundry and its behavior during spin-drying: both its vibration behavior, i.e. weight distribution (which can trigger multiple adjustment cycles to achieve a reasonably even distribution), and how much water it loses. When no more water comes out during spinning, the machine cuts the cycle short to save energy.
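A toy model makes the behavior above concrete. All coefficients here are made up for illustration; real machines use sensor feedback, not a formula like this:

```python
def estimated_cycle_minutes(base=43, load_kg=4.0,
                            rebalance_attempts=0, extra_spin_rounds=0):
    """Illustrative model of an adaptive wash cycle.

    Heavier loads and drum-rebalancing attempts lengthen the cycle;
    spin rounds where no more water comes out let the machine cut the
    cycle short. Every coefficient is invented for this sketch.
    """
    minutes = base
    minutes += 3 * max(0.0, load_kg - 4.0)  # heavier load -> longer wash
    minutes += 5 * rebalance_attempts       # redistributing the drum costs time
    minutes -= 2 * extra_spin_rounds        # dry spin-out ends the cycle early
    return max(20, round(minutes))
```

This is also why the display always starts at the same number: the machine only learns about the load's weight and balance once the cycle is underway.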
> When I start the cycle my washing machine tells me the same duration, always, but in actuality it takes different amounts of time every time.
If it says (e.g.) 43 minutes, but sometimes it takes 40 and sometimes 49 or 53, set your timer for 60 minutes and get on with life. Your laundry sitting for 17 or 7 minutes isn't the end of the world. If your timer goes off and it's still not done, set it for another 20 and do something else.
Of all the things to fill your head with worry and annoyance, laundry is near the bottom of the list for me.
Except when you live in a building with communal washing machines where you need to book time slots for laundry, as is common in many European cities.
My washing machine is kind enough to both indicate time to end in minutes, but also allows me to delay start so that the cycle is finished in [x] hours. It's not even that modern.
My modern dishwasher is also very kind, and displays the time to end in minutes throughout the wash. Counting down from an hour. But I don't know what kind of upbringing it had, for some reason, the sneaky bastard always adds another 25 minutes, when there is supposedly only 10 minutes left.
I guess dishwasher years are like dog years. At least it definitely behaves like a teenager at 2 years old, finishing when it wants to finish. Estimates be damned.
My Home Assistant setup does approximately this without the cloud, but it isn't magic: the cloud is just "someone else's servers", and I just host it on my own Raspberry Pi.
What does this have to do with “the cloud?” If you want to make a washing machine robustly notify its user that it’s done, surely a message sent over the local network or even Bluetooth is a better start. Anything involving the Internet is only useful when the user is outside the house, and there are more robust solutions to that than a server in us-east-1 that you hope the manufacturer keeps paying for.
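To illustrate how little infrastructure a local notification needs, here's a sketch using a plain UDP datagram on the LAN. The port number and message format are arbitrary inventions for this example (a real product would more likely use something like mDNS or MQTT on the local network):

```python
import socket

# Cloud-free "washer done" notification sketch: the machine fires a small
# UDP datagram on the local network, and any app on the LAN listens for it.
# Port 50007 and the message text are made up for illustration.

def notify_done(message=b"washer: cycle finished",
                host="127.0.0.1", port=50007):
    """Send the done-notification datagram (fire and forget)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(message, (host, port))

def wait_for_notification(port=50007, timeout=5.0):
    """Block until a notification datagram arrives, or time out."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.bind(("127.0.0.1", port))
        data, _addr = s.recvfrom(1024)
        return data.decode()
```

No server in us-east-1, no manufacturer subscription: if the house network is up, the notification arrives, and if it isn't, the washing machine still washes.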
In all seriousness, I think there has never been a better time to educate people on the fundamental philosophy of computing freedom, and I usually start people off with Eben Moglen's and RMS's talks.
I don't know how much of this is generational, how much is corporate sellout, or maybe even sockpuppetry for consensus cracking and other psyop techniques, but relearning the lessons of early computing (such as being able to do things offline and locally as a core part of a functioning decentralized system) seems highly in order.
This could have been fixed by having a minimal baseline of machines not running the same software.
Resilience comes from diversity, in computing and in biology. Whether that's running critical workloads on multiple cloud providers, or having one user interface on Windows on network A (Arista) with CrowdStrike and another on a Mac on network B (Cisco) with SentinelOne.
Sometimes perhaps you can't eliminate a single point of failure, but you can sure reduce them to a minimum.
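The back-of-envelope arithmetic for why diversity helps: if stacks fail *independently*, the chance that all of them are down at once is the product of their individual failure probabilities. The numbers below are invented, and real stacks are never perfectly independent (shared power, shared DNS, shared vendors), so treat this as an upper bound on the benefit:

```python
def p_total_outage(p_per_stack, n_independent_stacks):
    """Probability that every one of n independent stacks fails at once,
    assuming each fails with probability p_per_stack and failures are
    uncorrelated (a strong assumption, stated in the text above)."""
    return p_per_stack ** n_independent_stacks

# One shared stack at a 1% failure rate: a total outage is that same 1%.
single = p_total_outage(0.01, 1)

# Two diverse stacks at 1% each: a total outage needs both down at once,
# which under independence is on the order of 0.01%.
diverse = p_total_outage(0.01, 2)
```

A monoculture like "everyone runs the same EDR agent on the same OS" collapses this back to the single-stack case, which is roughly what the CrowdStrike incident demonstrated.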
Or you can choose to increase next year's bottom line, and thus your bonus, by not having a robust DR plan or system. You can also skimp on boring things like RAID and backups.
The trick for a CxO is to ensure that when failure happens, it's massive and widespread; then it's not your fault. The CxOs in a given industry won't be fired when their DR plans don't work, because they believed Gartner, and all their CxO chums at competitors did the same thing.
Nobody got fired for choosing IBM/Microsoft/Cisco/Crowdstrike/Azure, even if it's worse than the alternatives. People do get fired for bucking the trend even when it's measurably more reliable.
The update affected less than 1% of all Windows machines. [1] Although it was maybe the biggest software failure in history, it was far from the biggest possible one. The level of cloud connectivity in the world could basically break the world if we didn't have diversity.
Cyber attacks rarely take down stuff directly. Rather, attackers will establish a bridgehead into your organization first, then inspect the network and gather data for further (phishing) attacks.
Diversity only means more opportunities to install bridgeheads.
I'm not sure I follow. I doubt the web vs. native implementation of an application makes much difference when the terminal used to access it is unavailable. A cloud-based web app is not much help if no one has a working computer and browser.
I'm not sure we're quite at the stage where a check-in agent using their personal, unmanaged device to handle passenger data via a web app is a great idea.
They might not need them, but I'd be surprised if at least some companies don't install security BS on them anyway (just like they do on Linux machines), for compliance reasons. It can't hurt, can it? (At least that's what most IT departments thought before CrowdStrike.)
Hmmm, my experience is vastly different. I wanted to make a graph using 5 columns of data: the first column is the x labels, then data for 4 lines. The columns are not next to each other in the sheet. Then add linear fits through that. Then give specific HTML colors (whoops: not possible) custom colors and line types to the original lines and the fitted lines. It's possible, but the UI is terrible. Changing line type is simply bugged half of the time.
> training people to use an iPad or a web browser seems too complicated
iPads aren't designed to be turned into kiosks or airport departure displays, and web browsers aren't operating systems (except maybe ChromeOS). So this advice boils down to "don't run Windows", but CrowdStrike has caused outages on Linux as well.