The ECB mode in question is deprecated and has been for a long time, so normally you would expect the issue to arise only if you had an old version of the software. Is the article claiming that new versions of the software are using the deprecated mode in error? It isn't clear to me how to get access to any more definite discussion (MSRC VULN-060517?). We need a definite list of exactly which software is generating ECB messages.
It is hard to imagine that anyone who knows anything about cryptography could make such a mistake as using the ECB mode of operation to encrypt a text message.
The ECB mode of operation is strictly intended for the encryption of random or pseudo-random numbers that have a negligible chance of repetition, for example for the encryption of secret keys.
Any other application for ECB is a serious error that cannot be justified in any way.
If this was some kind of attempt at implementing a deliberately weak encryption method for export purposes, then it is much more stupid than limiting the length of secret keys to 40 bits, as was done in web browsers many decades ago.
I talked to a software developer once who had used ECB because it was the only way they could think of to allow an arbitrary block of ciphertext to be decrypted without decrypting the prior blocks (i.e. they needed random access as opposed to decrypting the entire ciphertext every time).[1] A lot of the older cryptography libraries I know of only supported ECB and CBC, and a lot of older programming examples only discussed those too, so that dev may never have heard of counter-based modes even though those have been around since 1979. Not sure if that's anything like what happened here, but if this feature has been around for a while maybe it was a similar case?
[1] I've talked to a lot of software developers about the dangers of using ECB over the years. The discussion I'm referring to was the only time I've really been surprised. Usually it's been because the library they were using defaulted to it.
To be more clear for those less familiar with the operation modes of block ciphers, the CTR mode allows encryption or decryption at arbitrary positions in a text, in a random order, exactly like the ECB mode.
Moreover, the implementation of the CTR mode is trivial when an ECB function is available from a library: a CTR encryption or decryption is done by invoking ECB on some non-repeating value, e.g. the value of a counter incremented after each encrypted block, and computing the sum modulo 2 (a.k.a. XOR) of the result with the plaintext (for encryption) or the ciphertext (for decryption).
For random access, an appropriate offset is added to the counter.
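To make that concrete, here is a minimal sketch of CTR built on top of an ECB primitive, assuming AES via Python's "cryptography" package; the helper name, the 64-bit nonce layout, and the counter arithmetic are illustrative choices of mine, not from any particular library or spec:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16  # AES block size in bytes

    def ctr_xcrypt(key: bytes, nonce: int, data: bytes, start_block: int = 0) -> bytes:
        """Encrypts or decrypts (same operation) starting at an arbitrary block offset."""
        ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        out = bytearray()
        for i in range(0, len(data), BLOCK):
            # ECB is applied only to a never-repeating counter block, never to the data.
            # nonce is assumed to fit in 64 bits; the block index fills the low 64 bits.
            counter_block = ((nonce << 64) | (start_block + i // BLOCK)).to_bytes(BLOCK, "big")
            keystream = ecb.update(counter_block)
            out.extend(p ^ k for p, k in zip(data[i:i + BLOCK], keystream))
        return bytes(out)

Random access is then just a matter of passing the right start_block, exactly as described above.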
When the threat model for a SSD or HDD is only that it might be stolen, the CTR mode is a perfectly secure method to encrypt the SSD or HDD, using the sector number as the counter value.
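As a sketch of that idea (again with the "cryptography" package; the 4096-byte sector size is an assumption of mine, not something from the thread), the initial counter block is derived from the sector number, so no two sectors ever share keystream and any sector can be read or written independently:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECTOR_BYTES = 4096  # hypothetical sector size

    def xcrypt_sector(key: bytes, sector_no: int, data: bytes) -> bytes:
        """Encrypt or decrypt one sector; the sector number fixes the counter."""
        assert len(data) == SECTOR_BYTES
        blocks_per_sector = SECTOR_BYTES // 16
        initial_counter = (sector_no * blocks_per_sector).to_bytes(16, "big")
        ctr = Cipher(algorithms.AES(key), modes.CTR(initial_counter)).encryptor()
        return ctr.update(data)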
The slower and more complex encryption modes that have been standardized for full disk encryption, and which are used in most full disk encryption programs, are designed for a much more powerful opponent: not just a thief, but someone able to access the SSD or HDD repeatedly, for many weeks or months, without the owner's knowledge. Such an opponent can take frequent snapshots of the SSD/HDD as it evolves and analyze them for values that change over time, which can reveal differences between the corresponding plaintexts and might allow the plaintexts to be guessed (e.g. if a text file is edited frequently, the content of the parts that change between versions might be guessed by an opponent who watches the SSD).
This threat model really applies only to someone who is a person of interest under surveillance by some TLA.
It does not take a state-level adversary to watch changes in a disk over time. It's a pretty basic cryptanalytic setting to operate in. You're talking about XTS vs. CTR; the problem with XTS is that it leaks in a way that somewhat rhymes with what ECB does, and of course the much larger problem is that none of XTS, ECB, or CTR are authenticated --- an adversary with durable access to your disk won't bother trying to cryptanalyze it, because it's easier to compromise one of the executable binaries running on it by manipulating the ciphertext.
All of these are reasons why you want to avoid using full disk encryption outside of the narrow problem of your laptop getting stolen out of the back of your car or something.
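A quick sketch of the ciphertext-manipulation point (Python's "cryptography" package; the message and the flipped byte are made up for illustration): without authentication, anyone who can write to the ciphertext can flip chosen plaintext bits without knowing the key.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(16), os.urandom(16)
    plaintext = b"pay $0000100 to mallory"

    ct = bytearray(Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(plaintext))
    ct[6] ^= 0x08  # flip one ciphertext bit; no key needed ('0' -> '8' in the amount)

    tampered = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor().update(bytes(ct))
    print(tampered)  # b'pay $0800100 to mallory' -- decrypts cleanly, nothing detects the edit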
I agree that full-disk encryption (i.e. encryption inserted in the block device driver) is not the right solution for storage encryption, and encryption at the file level is not a solution either.
The right place where storage encryption must be inserted is in the file system implementation, where it is possible to implement a completely secure form of authenticated encryption. This is especially easy to do in the so-called copy-on-write file systems or log-structured file systems.
However, file system implementations are much more complex than block device drivers, so modifying them to insert encryption in the right way requires a lot of work, which nobody has done for any of the popular file systems.
While there are some commercial solutions that claim to be secure, I have not seen any that manage the secret keys correctly (no secret key should be stored on the encrypted device itself, regardless of whether it is encrypted with a password, because encryption based on a password is much weaker than encryption with a long random key).
If you just prepend a counter, you can very easily sidestep the issue. Be sure to look up, though, how the encryption algorithm behaves when part of the input is known, predictable, or chosen by an attacker.
> it was the only way they could think of to allow an arbitrary block of ciphertext to be decrypted without decrypting the prior blocks (i.e. they needed random access as opposed to decrypting the entire ciphertext every time).
CBC mode does support that, though: decrypting block i needs only ciphertext blocks i-1 and i, so decryption is random-access and parallelizable (encryption is not).
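For illustration, a minimal sketch of random-access CBC decryption (assuming AES via the "cryptography" package; the helper name is mine): plaintext block i is recovered from ciphertext blocks i-1 and i alone.

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16

    def cbc_decrypt_block(key: bytes, iv: bytes, ciphertext: bytes, i: int) -> bytes:
        """Decrypt only the i-th block: P_i = AES_decrypt(C_i) XOR C_{i-1} (or the IV for i=0)."""
        prev = iv if i == 0 else ciphertext[(i - 1) * BLOCK:i * BLOCK]
        block = ciphertext[i * BLOCK:(i + 1) * BLOCK]
        # Raw AES block decryption obtained via the ECB primitive, used here only as a building block.
        raw = Cipher(algorithms.AES(key), modes.ECB()).decryptor().update(block)
        return bytes(a ^ b for a, b in zip(raw, prev))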
There was an issue that slipped into the iOS backup a while back, where some engineer had added a bare sha1 hash of your password (it's been a while, but I think they were trying to add encryption of the metadata in addition to the existing file encryption).
The existing encryption was GCM, AES key wrap, and PBKDF2, so these things were evidently done by teams of widely varying skill.
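For anyone wondering why a bare SHA-1 of the password is the odd one out there, a small sketch (standard library only; the password and iteration count are made up): one unsalted, fast hash can be brute-forced offline at enormous rates, whereas PBKDF2 is salted and deliberately slow.

    import hashlib, os

    password = b"hunter2"

    weak = hashlib.sha1(password).hexdigest()                        # unsalted, one cheap hash
    salt = os.urandom(16)
    strong = hashlib.pbkdf2_hmac("sha1", password, salt, 100_000)    # salted, ~100k iterations by design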
I probably could have gotten another CVE from that, but I wasn't sure I was supposed to be poking around in that stuff in beta. Somebody else did report it and it was fixed.
You're allowed to assume the phone network is not being eavesdropped on; you're not allowed to assume that for Internet transmission. Whether that's true or not, that's the accepted position.
If this encryption genuinely allows structural data retrieval - e.g. common invoice templates or medical reports - then this is absolutely a HIPAA problem, and once a facility is made aware of it they would need to respond.
FWIW, this message encryption is also accepted by UK NHS, in fact it's virtually the standard.
The mitigating factor is that typically the user is fully authenticated before being allowed to access the encrypted data...
Explain to me how this is a bigger risk than just physical burglary and stealing a computer with encrypted drives? The effort required to MITM and take advantage of this exploit is considerably more costly and time-consuming than just breaking into a building.
As OP mentioned, this is not about assumptions but about laws and their interpretation during consideration in a court of law.
The amount of sensitive data Microsoft must be collecting from all manner of organizations is insane. As far as I can tell companies just don't care at all about the data they leak to third parties anymore. They seem to either trust that they'll know when that data is abused or leaked or that they can lawyer their way out of being responsible for any fallout. It must be an endless gold mine for MS though to have that level of insight into what damn near every corporation is involved in.
Not really; the current method of secure Microsoft “email” via Exchange Online is receiving a message that you have “secure mail”. You then have to click on a link to view the message stored on a webserver behind SAML/LDAP auth with a username/password hopefully provided by offline methods. Usually best if behind a corp VPN, but that is going away now.
Sounds spammy, but this is industry norm from personal experience on Ironport and Exchange. Downside is the spam imitations of this that are likely quite successful at phishing creds from corporate users.
[MS-OXORMMS] reuses cryptography from [MS-RMPR], the offending 15-year-old specification that allows ECB to be used (as well as padded and unpadded CBC) for encrypting the content blob within the message.rpmsg attachment to "protected e-mails". As the original article states, Microsoft have kept using ECB for backwards compatibility with their 13-year-old product Office 2010, which reached end of life 2 years ago (Oct 2020)[1].
Despite all the talk about "Purview Advanced Message Encryption" replacing whatever came before it, a recent demonstration[2] shows attachments labelled "message_v4.rpmsg"[3] and "message_v3.rpmsg"[4] being sent to an external Gmail recipient so it appears [MS-OXORMMS]/[MS-RMPR] are still being relied upon but perhaps the server part of [MS-RMPR] is now only implemented by Azure and has no on-premises implementation?
In one of the examples[4], the "Purview Advanced Message Encryption" feature allows "message_v3.rpmsg" to be sent as an attachment to an external recipient that then doesn't have permission to view the e-mail. Why send an encrypted message to someone that is meant to have no means to decrypt it?
I don't know why more people haven't picked up on this, but this only applies to OME, which is the old Microsoft 365 encryption. Their new version with Microsoft Purview doesn't support legacy Office or ECB.
If you are wondering why this is a big deal, then I encourage you to complete Set 2 of Cryptopals [0], because it guides you through quite a few (easy) attacks you can create against AES-128 in ECB mode.
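For a taste of what Set 2 walks you through, here is a compact sketch of the classic byte-at-a-time ECB recovery against a toy oracle (Python with the "cryptography" package; the oracle, the secret, and the padding are stand-ins of mine, not anything from the article):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(16)
    SECRET = b"attack at dawn!!"  # 16 bytes the attacker wants to recover

    def oracle(prefix: bytes) -> bytes:
        """Toy service: ECB-encrypts attacker-controlled prefix || secret."""
        data = prefix + SECRET
        data += b"\x00" * (-len(data) % 16)  # zero padding keeps the toy simple
        return Cipher(algorithms.AES(KEY), modes.ECB()).encryptor().update(data)

    recovered = b""
    for i in range(len(SECRET)):
        pad = b"A" * (15 - i % 16)                  # shift the next unknown byte to a block edge
        block = (i // 16) * 16
        target = oracle(pad)[block:block + 16]
        for b in range(256):                        # try every value for that single byte
            if oracle(pad + recovered + bytes([b]))[block:block + 16] == target:
                recovered += bytes([b])
                break
    print(recovered)  # b'attack at dawn!!'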
Not that I want to talk down the importance of not using the block cipher default mode (using it in this setting is a bug Microsoft should urgently fix), but those attacks are mostly relevant in interactive settings, not against data at rest.
Here, the problem in fact is mostly that you can see penguins through the encryption.
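A quick sketch of that "penguin" effect (Python with the "cryptography" package; the key and plaintext are made up): identical plaintext blocks encrypt to identical ciphertext blocks, so the structure of the data stays visible.

    import os
    from collections import Counter
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    plaintext = b"\x00" * 64 + b"\xff" * 64    # stand-in for two flat regions of an image

    ct = Cipher(algorithms.AES(key), modes.ECB()).encryptor().update(plaintext)
    blocks = [ct[i:i + 16] for i in range(0, len(ct), 16)]
    print(Counter(blocks))  # only 2 distinct ciphertext blocks, each repeated 4 times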
I am seeing they claim the issue is with Microsoft Office 365 Message Encryption. Does this factor in if people have Azure Information Protection services/licensing? This is quite the vulnerability if Azure Information Protection is unable to fully secure email communication.
Problem is: if it says encryption, people think it's safe. Only if you're into tech do you know to look for the details. Personally, I'm always looking for 'end-to-end' and 'open source', like Tutanota or Proton do, for instance.
Microsoft is one of the least careless companies about security in the entire industry, and particularly so when it comes to cryptography. It's just a very big company, and they don't (or didn't, last I checked) have a cryptography review board the way Google does, to make sure people like Niels Ferguson get their eyes on all their crypto-bearing features.
> don't (or didn't, last I checked) have a cryptography review board the way Google does,
Exactly my point. Every company follows standard security policies. But for a company like Microsoft, given the responsibility it has, the profits it makes, and the amount of money its executives make, is it unreasonable to expect more?
Doesn't really matter - the pre-sales checkbox mumbling something about "encryption" was ticked, so Compliance, Legal, and SecOps are OK with it. Nothing to see here, move along :-)
That article doesn't even mention that they release new services without IPv6 support. Azure Flexible Postgres can't be run in a virtual network with IPv6 enabled (let alone support IPv6 operating mode/addressing).
I realise I’m replying to a link to my own comment…
In the context being discussed here it should be more clear that big vendors implement features like encryption or IPv6 purely to tick a compliance checkbox.
These types of features are not supposed to be used. They're there just to exist and enable sales to government institutions where someone who'll never use the product directly is ticking things off on a checklist.
Another classic example is Android encryption. The marketing materials literally look like this:
Encryption: Yes
I didn’t realise how much of a farce this was until I saw a presentation at an Apple developer conference where they outlined the four different “rings” of encryption they do. For example, when an iPhone boots up from cold most of it remains encrypted and requires the pin to decrypt. Even if you take it apart physically before the pin entry, you can’t get at the user data!
Android unlocks everything on boot, because its encryption support is “yes”.
Wowza. I would ask a sort of rhetorical question about how Microsoft could let this happen, then quickly realize that this is Microsoft and I'm not all that surprised.
The icing on the cake?
"The report was not considered meeting the bar for security servicing, nor is it considered a breach. No code change was made and so no CVE was issued for this report."
So basically, they know about it and don't seem to give a shit.
I'm guessing that the block size is a power of 2 and therefore not divisible by 3, while the pixel size (and so the line width) is divisible by 3. Misalignments from line to line will cause diagonal patterns.
Correct: at least for the recreated version[1], the block size is 128 bits but the pixels are uncompressed 24-bit RGB values. So you end up with 5.33 (repeating) pixels per block, which makes running spans of the same color transform into runs of a different color with a little bit of "noise" (always the same) at the end.
Do that enough and wrap it around at the width, and you end up with a diagonal/houndstooth-like pattern.
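To put numbers on that, a small arithmetic sketch (Python 3.9+ for math.lcm; the 998-pixel width is a made-up example, not from the thread):

    from math import lcm

    BLOCK_BYTES = 16   # 128-bit AES block
    PIXEL_BYTES = 3    # uncompressed 24-bit RGB pixel

    period = lcm(BLOCK_BYTES, PIXEL_BYTES)                                         # 48 bytes
    print(period // PIXEL_BYTES, "pixels =", period // BLOCK_BYTES, "blocks")      # 16 pixels = 3 blocks

    # Each row of a hypothetical 998-pixel-wide image advances the block phase by
    # (row_bytes % 16), so identical columns land in different blocks from row to
    # row -- that drift is what draws the diagonals.
    row_bytes = 998 * PIXEL_BYTES
    print(row_bytes % BLOCK_BYTES)  # 2 bytes of drift per row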
Wait til you find out there are people with VK accounts supporting the war in Ukraine working for Microsoft's top-level management for security and compliance.