> Having KARL on other OS platforms would greatly improve the security of both Windows and Linux users.
This is surely true, but at least on Windows the central security holes don't lie in Windows itself. Such holes exist, but exploits for them are very expensive, which suggests they are rare and not easy to develop; the holes lie in third-party applications.
For example, the 2017 outbreak of the Petya ransomware was spread via a security hole in the software update mechanism of the Ukrainian tax preparation software M.E.Doc. Other well-known attack vectors commonly used against Windows PCs are Flash Player and the Java browser plugin.
Well, it's true that the initial vector is often third-party software. But once you're able to run arbitrary code in a user-mode process running in a limited security context, you still need to attack some high-privilege component to get full control of the machine. Usually this component is the kernel, so additional kernel mitigations do help protect you.
This is so wrong that I have no idea why anyone would make such an assertion. Win32k.sys alone is a bottomless pit of EoP vulnerabilities. They are a dime a dozen and not particularly difficult to exploit. Also, in the context of kernel security, tax software, flash, and java browser plugins have no relevance.
> They are a dime a dozen and not particularly difficult to exploit. Also, in the context of kernel security, tax software, flash, and java browser plugins have no relevance.
Indeed, but I wanted to illustrate that while kernel security is important, there exist much more dangerous "open barn doors" (I don't know whether this translation of the German phrase "offene Scheunentore" is idiomatic English).
Oh, that's fair. Kernel security is indeed largely inconsequential in the real world. My initial reading of your comment made it sound like you were saying that Windows (the kernel) doesn't have exploitable vulnerabilities, only third-party software does.
I still disagree, but less strongly :) Flash has always been a weak point, and Java was too (though it hasn't really been hit for a few years). But not only have there been exploits hitting MSIE/Edge/Office; in my opinion they also deserve much of the blame for the poor security architecture that facilitates exploitation of plugins. As with untrusted fonts in the kernel, they seem to agree, insofar as Edge no longer supports ActiveX at all.
The overall number of exploits has gone way down, but there is still a stream of security patches rated Critical RCE coming out monthly for all the usual Windows targets. And now that Tavis Ormandy has shone some light on their AV engine, it has been revealed to be a gaping hole, in both design and implementation.
Regardless, there are far more practical realities that make Windows a security liability. If you surveyed 100 random penetration testers, you might find one who uses RCE exploits regularly (at least before Shadow Brokers gave everyone new toys). The playbook for everybody else largely consists of spear phishing to get a "beachhead" and then moving laterally with pass-the-hash and similar techniques - attacks that are technically possible to defend against if you read the documentation and set the right group policies, but which almost nobody in the real world actually configures.
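(To be concrete about the kind of registry/group-policy hardening I mean: the commands below are a sketch of common pass-the-hash mitigations, run from an elevated cmd prompt. They are illustrative only, not a complete hardening guide, and should be tested before rollout.)

```shell
:: Run LSASS as a protected process (LSA protection), which makes
:: dumping credentials from its memory considerably harder.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RunAsPPL /t REG_DWORD /d 1 /f

:: Stop WDigest from keeping plaintext credentials in LSASS memory.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest" /v UseLogonCredential /t REG_DWORD /d 0 /f

:: Keep remote-UAC token filtering enabled (the default), so local admin
:: accounts cannot be reused over the network for lateral movement.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 0 /f
```

None of this helps if domain admins log in interactively on compromised workstations, which is exactly the kind of operational discipline that documentation assumes and reality lacks.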
Was it really a mistake in a practical sense? Would Microsoft have been able to move their consumer base to the NT kernel via XP a few years later (a massive win for their platform overall) if GDI performance had taken a big hit?
Mark Russinovich, before Microsoft hired him, once demonstrated that moving the GDI into the kernel wasn't necessary for performance.
Windows NT was a nice, clean system from Dave Cutler, but it wouldn't run a lot of code that ran under Windows 95. Especially 16-bit programs, which ran in a compatibility box under NT that was not tolerant of them doing things they weren't supposed to do.
XP put a lot of marginal Windows 95 code in the NT kernel and supported bad 16-bit programs. It took a decade for Microsoft to dig out from that mess.
That's what many people on the internet say. It doesn't mean there aren't good arguments for the opposite standpoint, too.
At least I can tell that Microsoft is working to move parts of GDI step by step from the kernel back to user mode, which suggests that they, too, now consider this decision a historical mistake, because it opened too many potential gateways for security flaws.
> which should provide evidence that they consider this decision as a historical mistake, too, because it opens too many potential gateways for security flaws.
Or perhaps it was the correct decision at the time, but now (decades on, with computing power orders of magnitude cheaper and security vulnerabilities orders of magnitude more expensive) a different decision is appropriate?
I don't know about that... plenty of other operating systems didn't render fonts in the kernel at the time, and they seemed fast enough. Let's just say it was a mistake but that we can be happy Microsoft is now fixing it.
> plenty of other operating systems didn't render fonts in the kernel at the time, and they seemed fast enough.
Which ones? I'd be surprised if either classic MacOS or BeOS didn't have the display layer in the kernel; Solaris had it in userspace but was pretty slow; BSD was still tangled in lawsuits, and Linux barely existed.
> Yea, but they seemed fast enough with rendering fonts in the userspace (xfstt).
I don't remember there being enough GUI applications around on Linux/BSD to be able to say whether font rendering was fast or slow. Anything that used Motif was slow, Netscape was very slow. xterm was fast but fixed-font.
Classic MacOS can't really be said to have a kernel, as it had no privilege separation and no security whatsoever; user applications ran with full access to the hardware, and used cooperative multitasking instead of being preemptable. The QuickDraw routines were in ROM, but the only difference that made was that you had to modify a jump table instead of being able to overwrite them directly.
(All of the above supports your point; it's just that the Mac went further than you implied.)
The most amusing part of this is that with NT 3.x, Microsoft made a big deal in all their marketing about everything not being in the kernel, versus NetWare, where a bad NLM could crash the whole server.
Happy to hear they've got it sandboxed further than just "user mode".
Whenever I hear fussing about code running in userland vs kernel I think of this comic: https://m.xkcd.com/1200/
Note that this is only the case for untrusted fonts. Trusted fonts stayed in the kernel, which suggests that Microsoft still does believe the performance reasons are valid.
Both are correct. The NSA SMB exploit is typically ineffective for initial entry into a network because SMB is almost always blocked at the network boundary but almost never blocked internally. So both Petya and WannaCry had different means for the initial infection, then used SMB attacks to wreak havoc once inside. WannaCry was initially delivered using plain old email attachments, and Petya was delivered via a software update through a hacked update server.
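(As an illustration of why the worm component worked so well once inside: per-host blocking of inbound SMB is a one-liner with Windows Firewall, yet it is almost never done on internal networks. The rule names below are arbitrary, and this is a single-host sketch, not a substitute for proper network segmentation.)

```shell
:: Block inbound SMB (TCP 445) and legacy NetBIOS session traffic (TCP 139)
:: on this host; this is roughly what "blocking SMB internally" looks like
:: per machine, run from an elevated prompt.
netsh advfirewall firewall add rule name="Block inbound SMB 445" dir=in action=block protocol=TCP localport=445
netsh advfirewall firewall add rule name="Block inbound NetBIOS 139" dir=in action=block protocol=TCP localport=139
```

Of course, blanket-blocking 445 breaks file sharing and domain traffic on machines that legitimately serve it, which is exactly why nobody ships this as a default and why EternalBlue-style payloads had such a large internal attack surface.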