Isn't this what Anker has largely done? In a world of might-be-good/might-be-crap cables, chargers, batteries, etc., you can always select the Anker variety on Amazon. It'll cost you a bit more than whatever random product, but you know it's reliable. It's priced much cheaper than an OEM (Apple, Google, Samsung, etc.) accessory but is more reliable (quality-wise) than no-name accessories.
> When storing our unit, we noticed that the yokes didn't allow the ear cups to lay flat on the table. It also seems like pressing them down puts pressure on the yokes, which can mean that this part may get damaged over time if you're constantly folding and unfolding them to store in their carrying case. While our unit hasn't had issues, there are reports (for example, here and here) that the hinges and headband can crack.
You will find countless reports of hinges breaking after a few months of light use for just about every model that came out in the last 5-10 years, yet nothing has been done to fix it.
I would assume Anker chargers and cables are high quality, and simultaneously assume anything else of theirs is low quality and just a way to disproportionately profit off of the brand’s reputation.
We need to be careful here, because we still want people to sell off potentially productive land. Let's say I buy two lots along a highway with growing traffic because I believe that in the next 5-10 years it will be a good spot for a gas station/convenience store.
So now 5 years later the highway is developing and I build out my location. I decide I want to sell the other lot since I'm not going to build on it.
Now, I don't need to sell it. I could hold on to it; it's not that expensive to own. It would be a great spot for a fast food joint or whatnot. But I don't want to sell it to someone who immediately develops it as a competing gas station.
The public would benefit from developing the land - new services and more tax revenue. So it's in the public's interest for the owner to sell.
So you want to allow some amount of anti-competitive restriction. I get not wanting permanent, non-removable restrictions.
We can probably define a legal conservation easement as one that prevents further development on a piece of land, as opposed to regulating the uses and types of development.
There is a difference with an HOA. The HOA can be removed if enough members agree. It's not a restriction added by a past owner that can't be removed by the current owner.
Essentially most consumers just didn't care about quality and preferred the lower price over the higher quality product. That seems like normal consumer choice. The people didn't get poorer all of a sudden when the corporate breadmakers came along - the people simply decided they would prefer to get an inferior product at a lower price.
The reality is most people don't want to spend more money for higher quality goods. Or, if they do, it's on a limited set of goods based on personal preference.
Yes. And child psychologists will tell you that babies and children crave consistency; variety is not their thing. In fact, one of the reasons we as parents (adults) expose them to variety is so they develop an understanding of how to cope with inconsistencies. And it's always a careful balance: a small amount of variety but mostly consistency.
And I can tell you most little kids love hard-boiled eggs, which I suspect is because they are incredibly consistent in texture and taste, unspoiled by some cook seasoning them or cooking them differently.
If this is entirely built using open-source software, why not open source the site itself? Especially if you aren't planning to turn it into a commercial service.
Really great concept, and the execution seems to be pretty good. I'm a likely paying customer except that you don't support Microsoft 365. So I can use it for all my personal stuff, which is Gmail, but none of my businesses, which all run their email through Microsoft 365.
Awesome! We're rolling out Microsoft 365 really soon, starting this week :). Would love to hear about what services and workflows are most important for you.
There are entire systems engineering courses focused on failure resulting from a series of small problems that eventually in the right succession result in catastrophic failure. And I think we can say this was a catastrophic failure.
Think about it: first you need a race condition, and that race condition has to produce the unexpected result. That right there, assuming this code has been tested and is frequently used, is probably a less than 10% chance (if it were happening frequently, someone would have noticed). Then you need an engineer to decide they need this particular crash dump. Then you need your credential scanning software (which, again, presumably usually catches stuff) to fail to detect this particular credential. Then you need an account compromised to get network access, that user has to have access to this crash dump, and the attacker has to actually find it and grab it.
But even then, you should be safe because the key is old and is only good for getting into consumer email accounts... except you have a bug that accepts the old key AND a bug that doesn't reject this signing key for tokens accessing corporate email accounts.
This is a really good systems engineering lesson. Try all you want; eventually enough small things will add up to cause a catastrophic result. The lesson is, to the extent you can, engineer things so that when they blow up the blast radius is limited.
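To make those last two bugs concrete, here's a minimal sketch of the kind of issuer/scope check that was evidently missing. The names and structure are invented for illustration, not taken from Microsoft's actual token validation code:

```python
# Hypothetical sketch of scope-aware token validation (invented names, not
# Microsoft's code). The point: verifying the signature isn't enough -- the
# verifier also has to check that this particular signing key is allowed to
# mint tokens for this audience, and that it hasn't expired.
from datetime import datetime, timezone

KNOWN_KEYS = {
    # key_id -> (allowed_audience, expiry)
    "msa-consumer-2016": ("consumer", datetime(2021, 4, 1, tzinfo=timezone.utc)),
    "aad-enterprise-2023": ("enterprise", datetime(2025, 1, 1, tzinfo=timezone.utc)),
}

def validate(key_id: str, audience: str, signature_ok: bool) -> bool:
    if not signature_ok or key_id not in KNOWN_KEYS:
        return False
    allowed_audience, expiry = KNOWN_KEYS[key_id]
    if datetime.now(timezone.utc) > expiry:
        return False        # first bug: an expired key was still accepted
    if audience != allowed_audience:
        return False        # second bug: a consumer key was accepted for enterprise mail
    return True

# A token signed with the old consumer key, presented against corporate email:
print(validate("msa-consumer-2016", "enterprise", signature_ok=True))  # False
```

Either of those two checks on its own would have contained the blast radius described above.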
> that eventually in the right succession result in catastrophic failure.
With the caveat that when it comes to security, the eventual succession isn't a random process but will be actively targeted and exploited. The attackers are not random processes flipping coins; rather, their coin often lands on "heads", in their favor.
The post-mortem results are presented as if events happened as a random set of unfortunate circumstances: the attacker just happened to work for Microsoft, there just happened to be a race condition, then a crash randomly happened, and then the attacker just happened to find the crash dump somewhere. We should consider that even the initial "race condition" bug might have been inserted deliberately. The crash could have been triggered deliberately. An attacker may have been expecting the crash dump to appear in a particular place to grab it. The attacker may have had accomplices.
The other frightening possibility is that the attack surface targeted by persistent threat actors is so large that a breach becomes a certainty (the law of large numbers): when you own so many accounts, one of them will have the right access rights; when you have so many dumps, one of them will have the key; and so on.
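The law-of-large-numbers point can be made concrete with a back-of-the-envelope calculation: even if each individual dump or compromised account has only a small chance p of yielding what the attacker needs, the chance that at least one of n of them does approaches certainty (numbers below are purely illustrative):

```python
# Probability that at least one of n independent attempts succeeds, given a
# small per-attempt success probability p. Illustrative numbers only.
def at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(at_least_one(0.001, 10_000))   # ~0.99995 -- near certainty, even at p = 0.1%
```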
> The post-mortem results are presented as if events happened as a random set of unfortunate circumstances: the attacker just happened to work for Microsoft
Does it say that?
> the Storm-0558 actor was able to successfully compromise a Microsoft engineer’s corporate account
"Race condition" is the reason we all use to explain to management why we wrote a stupid bug. Everything is a race condition: "the masker is asynchronous, so the writer starts writing dumps before the masker is set up" sounds like a completely moronic thing to do. Say there is a race condition, and people say it's "a less than 10% chance of happening", but what do we know; maybe it happens on every big crash, and it just doesn't crash that often.
Why isn't it masking before writing to disk? God only knows.
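For anyone who wants to picture the race, here's a toy sketch with invented names (not the actual dump pipeline): if the redaction rules are installed on a background thread, any dump written before that thread finishes goes out unmasked.

```python
import threading
import time

redaction_rules: list[str] = []        # installed later, on another thread

def init_masker() -> None:
    time.sleep(0.1)                    # stand-in for slow, asynchronous setup
    redaction_rules.append("SIGNING_KEY")

def write_crash_dump(memory: dict) -> dict:
    # Whatever rules exist *right now* get applied; if setup hasn't finished,
    # nothing is masked and the raw key material ends up in the dump.
    return {k: "***" if k in redaction_rules else v for k, v in memory.items()}

threading.Thread(target=init_masker).start()
print(write_crash_dump({"SIGNING_KEY": "hunter2", "request_id": "42"}))
# {'SIGNING_KEY': 'hunter2', 'request_id': '42'} when the dump wins the race
```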
Crash handlers don't know what state the system will be in when they're called. Will we be completely out of memory, so even malloc calls have started failing and no library is safe to call? Are we out of disk space, so we maybe can't write our logs out anyway? Is storage impaired, so we can write but only incredibly slowly? Is there something like a garbage collector that's trying to use 100% of every CPU? Are we crashing because of a fault in our logging system, which we're about to log to, giving us a crash in a crash? Does the system have an alarm or automated restart that won't fire until we exit, which our crash handler delays?
It's pretty common to keep it simple in the crash handler.
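As a concrete example of keeping it simple, Python's stdlib faulthandler takes exactly this approach: the output file is opened up front while the process is still healthy, and the C-level handler writes tracebacks using only async-signal-safe calls, with no allocation or formatting machinery in the crash path.

```python
import faulthandler

# Pre-open the destination while everything still works; the handler itself
# then only has to write to an already-valid file descriptor.
crash_log = open("crash.log", "wb", buffering=0)

# Registers C-level handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL.
faulthandler.enable(file=crash_log, all_threads=True)
```

The trade-off is the point: no masking, no structured logging, nothing clever, which is exactly why sensitive material has to be kept out of the crash path in the first place.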
Unknown-unknown catastrophic failures like this one have always happened and will continue to happen; that's why we need resilience, which probably means a less centralised worldview.
Which probably means that half (or more) of the Western business world relying on Outlook.com is a very wrong thing to have in place. But since the current money incentives are focused neither on resilience nor on breaking super-centralized Outlook.com-like entities down, I'm pretty sure we'll continue seeing events like this one well into the future.
Indeed. While reading that I thought to myself “gosh, that’s a lot of needles that got threaded right there”. It feels like the Voyager Grand Tour gravitationally-assisted trajectory… happening by mistake.
A lot of accident analysis reads like this (air accident reports especially tend to read like they've come from a writer who's just discovered foreshadowing). And often there's a few points where it could have been worse. There's a reason for the "Swiss cheese" model of safety. The main thing to remember is there's not just one needle: it's somewhere between a bundle of spaghetti and water being pushed up against the barriers, and that's before you assume malicious actors.
Yeah I get that, it’s not a single Voyager, it’s millions of them sent out radially in random directions and random speeds and one or two of them just happen to thread the needle and go on the Grand Tour. It’s just an impression. Plus as you say there’s the selective element of an intelligence deliberately selecting for an outcome at the end (which confusingly is also a beginning).
"reducing your blast radius" is never truly finished, so how do you know what is sufficient, or when the ROI on investing time/money is still positive?
> - Not enough log retention in the corp environment to track a 2 year old infiltration.
It didn't say that Microsoft couldn't identify that infiltration had occurred, just that they didn't retain the logs to prove exfiltration. That makes a lot of sense: maintaining access logs is one thing, but retaining detailed logging of every file action by every user on a 100k+ user corporate network long-term would be a massive amount of storage of fairly limited value.
Even in this case, it might be nice to have but it wouldn't change any of the major findings you care about if you are Microsoft: that a bug allowed a key to be written to a dump file, that the scanning tools didn't detect the key in the dump file, and that the authentication process didn't properly check the keys.
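On the second of those findings, here's a hedged illustration of why pattern-based credential scanners can miss things: they typically look for recognizable shapes (PEM headers, vendor prefixes, JWT-like blobs), so raw key material sitting in a binary memory dump may not match anything. The patterns below are generic examples, not whatever tooling Microsoft actually runs:

```python
import os
import re

# Hypothetical secret scanner: flags byte strings that look like known
# credential formats. Raw key bytes in a memory dump match none of these,
# which is one plausible way a key slips past scanning.
PATTERNS = [
    re.compile(rb"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM-encoded keys
    re.compile(rb"AKIA[0-9A-Z]{16}"),                        # AWS access key IDs
    re.compile(rb"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\."),     # JWT-shaped blobs
]

def contains_credential(blob: bytes) -> bool:
    return any(p.search(blob) for p in PATTERNS)

print(contains_credential(b"-----BEGIN RSA PRIVATE KEY-----\n..."))  # True
print(contains_credential(os.urandom(256)))  # False (almost surely): raw key bytes go unnoticed
```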