The Linux kernel hidden inside Windows 10 (github.com/ionescu007)
333 points by eDameXxX on Aug 5, 2016 | 163 comments



Fun tidbit: it seems the original reason behind this was Android support in WinPhone 10, but when that was axed they migrated it to Ubuntu on the desktop.

If true that's a pretty nifty pivot.


Indeed, and the rumours are that Project Astoria worked too well, which scared Microsoft. If Android apps "just worked" on Windows, then why would developers bother using their new Universal Windows Platform?


That's a shame. I mean, I get the concern, but full Android compatibility would have done wonders for adoption, which is what their phone platform desperately needs. They could always have lured developers later with a substantially better UWP on top of it.


I don't believe it would drive adoption at all. Microsoft know Windows Phone today is dead. However, they still have the desktop, and with UWP they hope to bridge the development gap between mobile/tablet/desktop/laptop.

Had they shipped Astoria in Windows 10 to allow Android apps to "just work" it would have destroyed their UWP strategy.

So they might get a few more Windows Phone users but they would have lost control of the new development platform and given it to Google on a silver platter. And they would never have got anywhere near the numbers Android is at.

I would be shocked if Microsoft don't bring UWP to macOS, Linux and Android (I don't see how they can with iOS) in the not-too-distant future.


> Had they shipped Astoria in Windows 10 to allow Android apps to "just work" it would have destroyed their UWP strategy.

Kind of like how OS/2 support of Windows applications helped destroy it?


The OS/2 2.0 fiasco is a lot more complex than this.


Yes, but being able to run Windows 3.1 apps on it, with Windows 3.1 look and feel, at a higher price than OS/2 (or Windows) alone sure didn't help it.


You know that MS did the old OS/2 2.0 SDK betas from 1990, right?


As far as I remember, the PC of the day was just too slow for OS/2 Warp, and it didn't support the varied hardware of the day; am I wrong?


Yes, there is much more history to this fiasco than just this.


Regarding iOS, the purchase of Xamarin and the direction of their tooling could bridge that gap as well... Not that it will, but it very well could.

It would be nice to see apps from VS running on Linux and macOS. There's been some effort that one assumes supports this direction... Also, all things that make Azure nicer to use and better as a target are in their long-term interest.


Exactly, that is why they bought Xamarin in the first place. Xamarin was a key product. They will push Xamarin as hard as they can on all platforms simultaneously, and since they are the biggest software company on the planet, I see a bright future for Xamarin.


I agree, Xamarin is what will bring UWP to all the major platforms.

My question mark over iOS is because of how strict Apple are with approved apps. Will Apple like Microsoft bringing "app parity" into the iOS ecosystem, and more importantly, will they allow it?


Isn't WP already dead? I've never met a single person using one.


I've had 920, 930 and I'm now using a 950 as my main phone: The hardware is superb, and I really like the OS (running Insider Preview Slow Ring). There has been a steady stream of Windows 10 Mobile Insider Preview updates throughout the last year, bringing both new features and stability. In user interaction and interface consistency it is now much closer to iOS (or what iOS tries to be) than Android is.

That being said, it is pretty obvious it is a minuscule platform; apps are often lagging behind their iOS/Android counterparts, and there are some obvious ones missing (like Snapchat and Pokemon Go).

It is kind of sad, really; I think it would be healthy to have more than two major players, and Windows 10 users would probably feel quite at home in Windows 10 Mobile.


I tried using it for a year. Though the UI was polished, the lack of apps was a big problem. For example, WhatsApp's Chrome WebApp support wasn't released for WP when the feature was launched. Even the Audible and Kindle apps (considering these are from a major company like Amazon) on WP felt inferior to their Android counterparts. So I just ended up buying an Android device around a year ago.


That's the saddest thing with my Palm Pre. It's a very refined UX without any current app support.


I have a Lumia 950 as a "second phone" for use with a local SIM when I travel, and honestly I love it.


I use it. I would rate it equally as high as the iPhone, minus all the apps and Apple craziness. And much better than Android, which I don't care for. I plan on getting the flagship 950 XL Lumia when this 930 I'm on now breaks, if it ever does.


Which is really unfortunate.

I can't blame Microsoft for it, but the app-gap between WP/WinMo 10 and Android is getting untenable for me. I'm thinking about switching back, not because I think Android is better (I don't think it is at all), but simply because I'm starting to feel left out when all my family and friends are using, e.g., SnapChat to keep in touch and I...can't.

It particularly riles me up that there were perfectly functional third-party Windows Phone Snapchat apps, but Snapchat demanded that they be removed and still (evidently) declined to make a WP app.

Not to pick on Snapchat, it's not an uncommon story.

If I could even just download the Snapchat APK and sideload it on my Windows phone, that'd be enough for me. But I can't, since Astoria got killed.


There are a couple of really good reasons why they can't do that: trademark, PR and user trust.

They need to defend their trademark, or it will be genericized. Not that that's a big deal, but it can be a thing.

More importantly, if the Windows Store is filled with crappy Snapchat apps, it hurts their brand. If they release an update that breaks those apps, it hurts their brand. It's a no-win situation. Then there are hacks of third-party apps that are spun as being Snapchat's problem: http://www.cnn.com/2014/01/01/tech/social-media/snapchat-hac...

Snapchat is in a particularly bad position for the last bit, because what they're selling is fundamentally a lie, and having that lie exposed hurt them a lot.


Can you explain what it is about the operating system that makes a big difference? I have a mix of Android and iOS devices and I really don't have a preference. I never sit down with my tablet to use Android, it's to see the Snapchat my daughter sent me or to play Sequence or to check my email.

Likewise I'll pick up my iPod touch when I want to listen to a podcast, browse Instagram, or use Find My Friends.

There are hardware differences that make me use one device for its camera or more reliable Bluetooth, but I don't really associate that with the operating system.

Do you use a Windows Phone device because of the operating system or because (for example) it has an excellent camera or does something special with your XBox?


Sure. There are a couple of things. Some of them may be pretty petty, and some of them are things you can get in Android but aren't that way out of the box.

- app organization. One swipe and I'm looking at an alphabetized list of apps I can jump to by letter. This is huge to me, so much easier to find and identify an app versus a 2d grid dominated by icons.

- live tiles. My email tile shows me unread emails, photo tile shows pictures, calendar shows appointments, etc. Small thing but makes life a little simpler and makes my home screen look nice.

- settings are organized and laid out in what seems to me to be a much cleaner and logical way. I think this may have regressed some in win 10.

- maps are offline by default. Saves on data and helps out a lot when going through poor signal areas, which I do a lot.

- Cortana is super awesome; IME it works better than Google Now.

- I can deny apps individual permissions.

- OneDrive integration. I didn't use OneDrive prior to buying the phone, but since it was auto-installed on my Windows computers I started using it, and it's very convenient.

A lot of it is just that the minor usability and UI struggles aren't there. To me it's a lot more polished and easy to use. I may have bad taste. :)


OneDrive doesn't sync files the way Dropbox or Google Drive do, and it's super annoying. Cortana is no better than Siri or Now, in that it basically just sets a timer or dials a contact, and everything else in my experience is chock-full of irrelevant links and spammy listings. The one thing I actually appreciate with Windows 10 in particular is that the Weather live tile is good and doesn't have AccuWeather or Weather Channel banner ads in it. I am glad they went with just giving you the weather. My LG tablet's weather app is a banner-ad crapfest, and my iPhone's weather app is pretty much useless by comparison. Why can't everyone just show me whether it's going to rain in the next 15 minutes without subjecting me to ludicrous ads? Seriously. Thank you Windows for that at least.


The HTC Sense weather app manages to do that, despite using Accuweather as the data source. There's just a little "Accuweather - more details online" bar at the bottom of the screen, but I've never felt the need to actually follow it.


What do you mean it doesn't sync the files the way they do? Can you explain?

If I take a picture on my phone, it syncs to my laptop and desktop. If I put something in my OneDrive photo folder on my computer, it shows up in the photos on my phone.


Well, the main problem I had with OneDrive, which is why I started using Google Drive and Dropbox exclusively, was this issue: https://onedrive.uservoice.com/forums/262982-onedrive/sugges...

You share a folder, update a file, and then the other person does not get the update to that file; so they are sitting there on the phone with you saying "no, I don't see the updated file", and you have to have them log in to the web version and basically download the file to their OneDrive. That happened to me several times and I just gave up on OneDrive. I don't know if they fixed that. Yes, it syncs things for YOUR folders, but it will not sync things to shared folders.


- maps are offline by default. Saves on data and helps out a lot when going through poor signal areas, which I do a lot.

Note that Here maps with offline maps is also available on iOS and Android:

https://pages.here.com/app/


- solved by any launcher

- solved by any launcher

- cannot comment

- gmaps handles this automatically

- cannot comment

- baked into android since L

- gdrive

Nearly all usability and UI struggles are easily solved with a custom launcher.


- solved by any launcher - solved by any launcher

Then you have never used Windows Phone. None of the Android launchers provides the equivalent of WP tiles with the same amount of integration. There is no API that launchers could use to pull out the same amount/types of information.

- gmaps handles this automatically

Nonsense. It does some prefetching, but if you are in another country (no data) and you go slightly off-route, there is no map coverage anymore. Luckily, newer versions of Google Maps allow you to download offline maps ahead of time as well.

For me, the reason to switch if I were still on WP is that the app state is so deplorable that even Microsoft's apps on iOS and Android are miles ahead of WP's counterparts.


"There is no API that launchers could use to pull out the same amount/types of information."

What's missing in Android compared to WP in this regard? I have widgets on my Android homescreen that have buttons to activate app functionality (e.g. Audible's 'play' button) and that dynamically update their content. Is there some richer interaction or viewing scheme WP supports?


Interestingly enough, Microsoft actually has an Android launcher called Arrow. It's not an attempt to bring the WP experience to Android, though.

That said, it's a great launcher and I use it on my Note 4. I still don't fully understand the strategy behind why Microsoft released it though.

https://play.google.com/store/apps/details?id=com.microsoft....


You should probably look at OSM (eg. OSMAnd, but there are others). I just download the whole country map before I visit somewhere.


I use OSM on my Garmin GPS for hiking/cycling, especially when Garmin's maps for a particular country are overly expensive.

http://garmin.openstreetmap.nl


HERE Maps for Android is pretty nice - at least for traveling in Europe there is a very good UI for downloading regional maps, and it performs offline routing pretty well.


Which I didn't deny, prefacing my comment by saying that some of the items I was about to list could be done in Android, but aren't that way out of the box.

Customizability is great, and I think a phone OS should be highly customizable. But I also want it delivered to me in a state where I have to do as little customization as possible. I don't want to fool with it more than I have to.

Can you comment further on google maps doing that automatically? I have not seen it to be the case that I can turn off data, input a destination, and get turn-by-turn directions the whole way.

I am fully aware of Google Drive, but the reason I started using OneDrive is because it is already installed on all my Windows computers; all I had to do was sign in. The same is not true of Google Drive. I take a picture with my WP and by the time I walk over to the computer it's already there, and I didn't have to do anything to set it up.

Also, regarding permissions: no, it was "added" early on, but you couldn't use it without downloading other apps and then rooting your phone. I think it's supposed to be in Marshmallow for-real-this-time, but I haven't seen it yet.


It feels to me that, with the kernel emulation in place, a third party can now step in to provide reasonable Android emulation on top of that. I mean, unless Android uses kernels with some custom syscalls, it should be usable, right? Now it's a question of recreating the userspace on top of that. I would imagine that many pieces would just fall in naturally, while others (most things to do with UI) would probably require rewriting to properly integrate with the Windows desktop... but it's still doable.


But do the potential users on the PC really need anything more than

https://en.wikipedia.org/wiki/BlueStacks

or

http://www.androidauthority.com/best-android-emulators-for-p...


They probably don't, but I suspect that using WSL would significantly improve performance. In fact, it's probably one of the existing vendors of products like BlueStacks that will eventually do so.


Honestly, I use the app gap as a feature. Not having the fluff is great, because Snapchat is making people illiterate. We went from real words to abbreviations to no words and minimal attention spans.

On the other hand, now that Uber and Bank of America have universal apps, it feels more complete.


Missing "fluff" is not the problem. The problem is this: When the next killer app comes out, whatever it is, you may want to use it -- but it is very unlikely to be available on Windows Phone.


However, want and need are very different. I may desire an app, but not having it can make me better off. For instance, look here at the selfie deaths:

https://en.wikipedia.org/wiki/List_of_selfie-related_injurie...

This may seem ridiculous but that's what these apps are doing to people. :)


Interesting attitude. People kill themselves with toasters too. Would that dissuade you from using one?


Are you really trying to defend less choice?


When life gives you lemons, make lemonade!


More like, when life gives you lemons, criticize the intelligence of people who have apples.


Why the downvote? This is a legit and conscious comment


Some person doesn't like Snapchat. Their dislike of Snapchat and inability to install it on their phone due to choice of platform has absolutely nothing to do with, and absolutely no influence on what phones other people buy or what apps they install on it.

I have an Android phone and it doesn't have Snapchat on it. I also have friends with varieties of phones, and Snapchat, and they're very intelligent. It's downvoted because it's a pointless "get off my lawn" comment that says nothing other than "I look down on all Snapchat users".


It's unlikely that the absence of snapchat on Windows phone will improve literacy.


Snapchat on Android is also horrible.


Besides, Astoria would lead to a second-rate UX for Windows Phone (WP), as Android apps on WP are coded to Android UX guidelines, not WP UX guidelines.

If Microsoft is building a framework, I'd expect it to provide a great UX for their users.

If a compatibility layer is required, it would be better for MS users to have it in the reverse direction, to let WP apps run on Android, and use that to try to convince devs to build WP apps.

In summary, it never made sense for MS to provide WP users (their own platform!) with a second-rate UX.


I don't think the average user cares as much about a uniform UX as they do about just being able to use the app. Example: Pokemon GO uses its own UX entirely on iOS and Android, and that does not seem to harm its popularity.

I would claim a 'nice UX' is something that would not drive market adoption nearly as much as other factors (value, usability in general, etc.).


I haven't played Pokemon Go, but it's fine for apps to use their own custom theme. What's not fine is to use one OS's theme on another OS, like an Android app on WP.

I see where you're coming from when you say that nice UX is not the most important thing, but a poor UX gets in the way of usability. The conventions users are used to no longer work, which gets in the way of using the app for its intended purpose. UI that doesn't fit might also cause users to pause and reorient themselves, again distracting them from their goal.

For example, wall switches in India are on when pressed downward, as opposed to upward in the US. Either works, but if the switches are different from room to room in your house, it's confusing.


Isn't that basically what happened to IBM when they emulated Windows in OS/2? Devs didn't bother to build anything for OS/2.


That wasn't the only reason OS/2 was killed. I have used OS/2 2.11 and OS/2 Warp, and OS/2 was truly a better Windows than Windows.

There were many other factors that killed OS/2. Just before Warp was released, it was quite hyped in the press. The then-unreleased Windows 95 was considered to be a train wreck and OS/2 a true 32-bit operating system. However, once people got their hands on Warp, it turned out that the installation was difficult unless you had hardware covered by the relatively small driver base. Moreover, IBM didn't really seem to care about supporting OS/2 for end users. So much of the enthusiasm evaporated even before Windows 95 was released.


Companies focus on iOS and Android on mobile, and on Win32, macOS and sometimes Linux on the desktop, and many apps are nowadays web apps. Look at the Windows Store: for years it has been a wasteland with just a few good apps and lots of junk. WinRT/UWP has too many downsides and no major benefits compared to Win32. Even PC gamers prefer Steam with normal Win32 games and are better off sticking with it. The Windows Store is a trainwreck. WinPhone10 has already failed spectacularly, with a decreasing smartphone OS market share of around 1% worldwide.

I can imagine that Microsoft might improve Win32 touch support (a UI theme with larger buttons) and add Win32 support to their next mobile Windows OS. Running (legacy) Win32 apps on mobile would certainly be huge for enterprise customers. Attach a phone to a monitor, place a Bluetooth keyboard in front of it, and you would have a full PC that you can carry around in your pocket. Whereas their current Continuum desktop is just a bad joke and doesn't fulfil the vision at all (just a desktop background where only Office works, and no other third-party PC applications)!


>Indeed, and the rumours are that Project Astoria worked too well, which scared Microsoft. If Android apps "just worked" on Windows, then why would developers bother using their new Universal Windows Platform?

This is categorically false. You only need to visit the Windows Phone subreddit to see all of the complaints about how poorly implemented it was, how it affected the phone's performance, and the app compatibility issues.

Microsoft's attempt to put Android apps on Windows Phone, and their harebrained idea to replace Google services with their own services within the app, were failures not because they would have cannibalized their native apps, but because they just couldn't pull it off.


In one of the videos I watched on their blog, they mentioned this ability has been there since NT. They said something like "we just brushed the dust off of it and it worked".

https://blogs.msdn.microsoft.com/wsl/


I think it was called Project Astoria.


Microsoft has actually been doing Unix-related stuff from way back - a fact that might not be known to some.

Xenix, then SFU (Services For Unix - for interop on Win NT or 2K), plus they had (part of?) a POSIX subsystem around then or earlier (I used it a bit for C utility dev work on WinNT), etc.


Of course, Microsoft then licensed XENIX to the Santa Cruz Operation, which after several years renamed it to SCO Unix. SCO eventually split apart, but the piece that ended up with SCO Unix renamed itself the SCO Group. The SCO Group, with (possibly indirect) financial backing from Microsoft, attempted to use XENIX in a number of copyright infringement lawsuits to attack Linux vendors.


Hmm, there are a variety of inaccuracies in your short statement. Please carefully read the below so that you can be protected from propagating these errors in the future.

· SCO Unix, although it incorporated some Xenix code, was very different from Xenix; it derived a lot from SVR4.

· SCO basically went out of business, because their value proposition was "Unix, but on a regular PC so you don't have to buy an expensive RISC workstation." This stopped being a useful value proposition around 1997 because they couldn't keep up with Linux. So they sold the SCO Unix product line to Caldera, a Linux distributor, in 2001. I'm not 100% sure but I don't think any employees moved from SCO in Santa Cruz, California, to Caldera, which was in Utah. Maybe someone stuck around to smooth the transition?

· Microsoft's backing of SCO was direct; they "bought a license" in 2003. Maybe they also did some indirect backing that I don't know about.

· Caldera basically failed in the Linux distribution space in 2002, and the investors booted out CEO Ransom Love and replaced him with Darl McBride, formerly of Novell, IKON, a couple of startups, and Franklin Covey (!).

· The copyright infringement lawsuits weren't based on Xenix. Nobody claimed Linux had copied from Xenix. Rather, they were based on Bell Labs Unix, from the 1970s, and AT&T Unix System Labs System V Unix. SCO had supposedly acquired the copyrights to these sometime in the 1990s, and in fact them giving permission is how the Lions book was legally republished in 1996 or 1997, but it turned out that in fact what they had acquired was not the copyright ownership, but a sublicenseable license to the copyright. The actual copyright rested with some company Novell had bought up at some point, so the lawsuit got thrown out of court.

· The SCO Group sued not only Linux vendors, notably IBM, but also Linux users, notably AutoZone and DaimlerChrysler. This is an error of omission, but it's crucially important.


I appreciate your critique, but stand by my original statement. Let me address the items you bring up point by point. Please excuse my brevity.

1. Xenix was based on Bell Labs V7 and then AT&T SVR2, SCO Unix was based on SVR4. Xenix and SCO Unix shared code in addition to that derived from their common ancestry. Claiming that they were "very different" is obviously a matter of opinion. In my opinion, they were pretty similar.

2. SCO split into two pieces: one was sold to Caldera, the other became Tarantella. As far as your hypotheses about why SCO failed, I'm not seeing why it is relevant to the discussion.

3. Microsoft indirectly backed SCO Group through BayStar capital. Please verify this for yourself with a quick web search so you will be protected from propagating your misinformation in the future.

4. Not relevant.

5. Both XENIX and SCO Unix were derived from the same original codebase. SCO Unix would not exist if it were not for XENIX.

6. Actually, I think the AutoZone and DaimlerChrysler lawsuits are fairly unimportant within the context of the original discussion. To me, the interesting point is that MS had the foresight in 1979 to see that Unix would become a big thing and as a result created Xenix. Decades later, when Unix (Linux) was indeed a big thing, by proxy, it attempted to use XENIX to fight back at Unix (Linux) proponents.


Regarding #1, that is mashing together two separate products into one. (I worked for SCO.)

The history is that Microsoft was producing Xenix - a port from AT&T. Eventually they decided to stop doing that work themselves, as DOS, OS/2, LAN Manager and similar were of interest. (Note though that they ran their email infrastructure on Xenix - it was part of their operational business.) Xenix was handed over to a father-and-son company in Santa Cruz. They called the company Santa Cruz Operation so that in phone calls to Microsoft, the Microsoft folks would think SCO was a branch office, not a different company.

Xenix was updated, ported etc, eventually being called SCO OpenServer. That "SCO Unix" did not have SVR4 in it. Heck it could barely do multi-processor and similar. In 1995 Novell (owners of the AT&T Unix at that point) then "handed" SVR4 over to SCO, with the result being called SCO UnixWare. That was the SVR4 derivative. SCO did this to get into the enterprise space, and couldn't do the engineering to bring the Xenix derivative there.

Later in the 90s there was a game of musical chairs as Intel announced the Itanium, and all the RISC chip and Unix vendors formed consortiums. SCO was part of one with IBM, named Project Monterey. It was supposed to have some Linux compatibility, but as time progressed IBM cared more about Linux (and AIX), while SCO couldn't keep up with the engineering commitments.

In 2000 all the Unix stuff (OpenServer and UnixWare) went to Caldera. Well, it would have, but the deal was so complicated (it was pseudo-licensing, with money and royalties going back and forth). Heck Caldera only really wanted OpenServer. The SEC went "huh" a while later, so a new deal happened in 2001. There was also a tech downturn, so a simpler deal was done. All that was left of SCO was Tarantella, hence the company rename.

Caldera renamed themselves to The SCO Group a while later. The grounds for suing IBM were around Project Monterey, although as far as I know neither side did all they should have. And then since IBM had gone in on Linux, SCO Group decided to sue claiming IBM had put Unix code into Linux. Novell got involved because the details of "handing" over mattered.


Thank you for clarifying!

Were you around when stuff moved over to Caldera? Were there SCO people who moved to Utah, or who became Caldera employees in California?

My condolences on having had such a shitty thing happen to such a beautiful company. I don't ever remember SCO Unix being a particularly great Unix, but it was pretty solid, and the company was awesome; it's a place I would have been proud to work if I'd had a chance.


I was around during the transition, but I'd always been on the Tarantella side (I was an architect of the product) and so didn't work on the Unix side directly. Heck, our division's customers were overwhelmingly running Solaris, with Linux picking up in the 2000s.

The Unix side of the business was rapidly shrinking. 1999 was a banner year because everyone had to go out and upgrade their operating systems so they could claim Y2K compliance. 2000 was the end of a tech bubble, and also saw a tech downturn. This led to drastically reduced sales. Throw in Linux getting increased adoption (remember that IBM promised to spend $1 billion on it, which gave it a lot of credibility), and SCO's Unix products no longer had particularly relevant sweet spots in the market.

I don't know of anyone who moved to Utah. The people in California became part of Caldera but I don't know exactly how that was legally structured. Also SCO had folks all over the US and world. At the peak it was ~1,100 employees and $250m annual revenue.

SCO was a good company and many people liked working at the company because they liked many of their colleagues. Employee turnover was quite low because of that. It also had bad points, but what doesn't?

SCO Unix (OpenServer specifically) was great, but not from a technological viewpoint. However, the vast majority of users were not techies - they were dentists, receptionists, pet cemetery workers, etc. OpenServer came by default with a GUI that let those regular folk get things done in a friendly way. See my sibling comment about why Caldera wanted OpenServer.

And SCO did have some firsts. It was also very good at snatching defeat from the jaws of victory. It was the first company to offer Internet in a Box: you installed the system, and you were now on the Internet (as a server). We were the first to ship a browser (licensed Mosaic). We shipped by far the most copies of Netscape. At one point Pizza Hut started allowing orders over the Internet - it was more of a proof of concept than massively used and widespread, but SCO was behind that too.


Heck Caldera only really wanted OpenServer.

Why did Caldera only want the more dated OpenServer? Did they plan to provide consulting services to existing OpenServer users?


The reason SCO was successful was because of how it was used. Imagine it is 1991 and you have a dentist's office you want to computerise. You go to a local VAR (value-added reseller) and they would come in and install a complete solution. It would consist of a Compaq "server" ($5k), several terminals, com port cards for the server, a tape backup system, and some dental office management software. Oh, and a $1,000 copy of SCO Unix. The VARs typically marked up what they sold by 15%, which is how they got paid. Two related bits of trivia: at one point the only software that existed for managing pet cemeteries ran on SCO OpenServer, and most McDonalds (I believe US only) had OpenServer in each branch.

By the late 90s SCO had 15,000 of these VARs. Caldera wanted to essentially substitute Linux for OpenServer into that setup, and make $1,000 per copy of the OS rather than $25. The VARs realised they could supply Linux themselves which is why Caldera had no traction doing that, and doubled down on SCO OpenServer and the existing installed base. Hence the company rename too - it was all SCO products.

UnixWare was touted as Enterprise ready. It did multi-processor well, had sophisticated filesystems (Veritas), could do clustering (Non-Stop) etc. But at the time most who wanted "big" Unix went to one of the RISC vendors each of whom had their own Unix. If they wanted to go Intel (which wasn't credible until the Pentium Pro) then the competition was Windows NT.


> MS had the foresight in 1979 to see that Unix would become a big thing

1978, I think. The idea that Unix would take over from CP/M seemed pretty common at the time. The main problems were the high Unix prices and the cost of the hardware required.

However, Microsoft got lucky with DOS on the IBM PC, and the PC market took off, and IBM also licensed Xenix....

Unfortunately, IBM decided that it needed to own the whole stack. This precluded using Unix/Xenix/etc as the PC client, which was a problem because Microsoft was now dependent on IBM. (Ballmer called it "riding the bear", followed by BOGU, for Bend Over, Grease Up.)

IBM finally published its strategy in 1987 as Systems Application Architecture (SAA), which mandated the use of the extended edition of OS/2 (not available from Microsoft) as the PC client. SAA also included IBM's PS/2 micros with MCA expansion buses, intended to break the link with the DOS-based PC industry.

Faced with possible exclusion from the IBM-controlled corporate market (1), the best deal Microsoft could get in 1985 was to co-develop OS/2, so Xenix -- despite having been by far the most popular Unix of its day -- became surplus to requirements.

Another factor was the arrival of AT&T's System V in 1983. This showed AT&T was serious about selling Unix, and maybe Microsoft didn't think it could compete. What it did was contribute small parts of Xenix to SVR4 (1988).

SVR4 was where AT&T & Sun decided to redefine and take over the IT industry, which led to the great Unix Wars and the creation of OSF etc. All of which infighting left the door open for Windows, DR-GEM, DesQview and many others....

Fun days....

(1) SAA flopped and IBM had to go back to making PC compatibles, so it turned out that IBM didn't control the corporate market as much as it thought.


Thank you for your point of view!

I'd forgotten about the BayStar thing, but its existence seems to be debatable; someone at BayStar said they got a promise from someone at Microsoft to guarantee their SCO investment, but that Microsoft didn't actually fulfill the terms of the guarantee, and Microsoft denies any such guarantee ever existed. So it ends up being a he-said-he-said thing.


I interviewed at SCO in Santa Cruz, turned it down (was in a phase of life where I required a higher salary - too much dumb debt). The biggest thing they were concerned with was my ability to apply W. Edwards Deming's Statistical Analysis/Planning methods. Just from very short interviews it had the feel of being a "too suddenly large" company trying to get a handle on quality control.

FWIW.


True. I read about parts of that history on Groklaw as those later events unfolded. For a while there, IIRC, based on what I was reading then, it looked like they might have nearly succeeded. Good that it didn't happen. IIRC, parts of the industry rose up together against it, also individuals. Don't know the full details.


Refs.:

https://en.m.wikipedia.org/wiki/Groklaw

https://en.m.wikipedia.org/wiki/SCO/Linux_controversies

https://en.m.wikipedia.org/wiki/Pamela_Jones

" In 2010 the Electronic Frontier Foundation awarded the Pioneer award to "Pamela Jones and the Groklaw Website" for "Legal Blogging".[3] "


They didn't build SFU - the first thing called that was a rebranded MKS Toolkit, from Mortice Kern Systems (later, I think, bought by somebody else). The second thing named SFU was a different POSIX layer built by some company I can't remember the name of.

Remembering having to use it (the MKS version) to port some Solaris software, I recall my team mates having an alternate expansion for the acronym that isn't that hard to guess.


The other thing was called Interix before it was bought by Microsoft and became SFU 3.0.


Don't know if they built SFU internally from the start, or bought/licensed it and then built upon it. But it was MS engineers in the MS Hyderabad office who told me about SFU and what it was for.

I remember Mortice Kern's MKS Toolkit from BYTE and PC Magazine ads, but did not know of the MS connection you mention.


Xenix-UNIX was way underpowered on 16-bit x86s. By the time 32-bit CPUs came along, there were better ports of UNIX, like Linux and BSD.


That's completely ahistorical. People were running Xenix on 32-bit CPUs well before they were running Linux or BSD/386.


True, I used it. SCO Unix was a little better in speed, IIRC. But the hardware I used it on may have been better by then, too.


Honest question: What's the deal with this stuff? I'm not interested in the reasoning that it's done just because they can do it. I mean... Why not just ... use Linux? I suspect it's a comfort zone thing, and most people really don't want to move out of their comfort zone (and especially programmers have a hard time admitting this). I understand people developing for the platform, but I don't understand people developing for Linux on Windows. I hear some people talk about Visual Studio being a great IDE, but I know tons of people that don't use VS. They use ST3 on Windows to develop for Linux, and they're happy about this stuff, and I just don't understand why people wouldn't just migrate to Linux in the first place.


For me, the Adobe suite is one thing Windows has over Linux. That and easier hardware support. I'm currently running Ubuntu on my desktop and about 1 in every 3 times I suspend the machine, it needs to do a full reboot. Also every time I do a software update, I need to reconfigure my video card.

Currently, I have to switch over to my MacBook Pro to do any work that requires Adobe software, and then back to my Ubuntu machine for development. So if this Windows 10 thing works out, I might consider switching over (although the last time I used Windows I wasn't a fan of juggling three different shells--four if you count Git Bash: PowerShell, cmd.exe, and Cygwin).

Otherwise, I might do what I should've done in the first place and ultimately get a new Mac that can support 4k properly--the one I have is the generation right before 60Hz 4k support, and the 30Hz my current one is at is surprisingly annoying to work with.


> What's the deal with this stuff? I'm not interested in the reasoning that it's done just because they can do it. I mean... Why not just ... use Linux?

For a desktop/laptop, particularly in corporate/enterprise environments (but also often for solo devs, where it's also a personal PC), there are lots of reasons you might want to have Windows outside of the actual dev-specific parts of your work. This is an alternative to second-computer / dual-boot / VM-based solutions to having your Windows and Linuxing it too.


I absolutely need full Excel & Word compatibility for collaborating on documents with others, so Linux is not an option. Neither LibreOffice nor MS Office Online provides appropriate compatibility; editing a complex document made by someone else and sending it back to them will not preserve its formatting.

This means either MacOS with the associated hardware lock-in, or Windows with all the associated problems in installing/compiling dev/research tools that just work on Linux or Mac.


> MacOS with the associated hardware lock-in

Macs will run both Windows and (with some EFI hacking) Linux. It's true that you need a Mac to (legally) run macOS, but that's lock-out, not lock-in. Essentially it is a $1000 license to the OS and perpetual updates.


I meant that if my standard workflows and software are on MacOS, then I'm locked in to a single hardware provider and their offering.

For some cases this is a limitation. Their laptops are quite nice and the iMacs are okay, but if I want, say, a powerful desktop workstation with a bunch of NVIDIA GPUs for CUDA, then the Mac Pro doesn't really cut it, and I'll have to run Linux on it; and if I need an extra secondary/tertiary computer/laptop that doesn't really need to be good, then I have to choose between it running a different UI than the main computers or paying a rather hefty premium because of the lock-in.



Unfortunately, Windows has much better hardware support (e.g. Skylake laptops, multiple monitors via DisplayPort MST).


Does it, though? I use four monitors with Linux Mint at work while my laptop is docked. The 'Optimus' dual-GPU thing works out of the box. But I have a Windows 10 gaming rig at home for Star Citizen, and when I installed Windows, it didn't recognize the onboard NIC or the AMD 9790 GPU with default drivers. And as it just so happens, late last night, this same Windows 10 box mysteriously lost network functionality, and I had to reinstall network protocol drivers (!). I hear "hardware support" a lot, but I haven't had problems with hardware support on Linux for a few years at least, and every time I try to use Windows, I have hardware problems. The Windows users on our dev team seem to have problems regularly. You're right that it used to be a thing. But I just don't buy it anymore.


> Does it, though?

Unless you live in a very special bubble, yes.


My desktop, my brother's laptop, and a printer work well with Linux, perhaps because they aren't high-end.

On the other hand my mother's friend had a mysterious driver problem (screen glitching) with his NVidia card after upgrading to Windows 10.


>> perhaps because they aren't high-end

Or because they are not new. I had a bad experience after I bought, in January 2016, an Intel NUC with a Skylake processor and Iris 530 graphics.

I had a few months of struggle, with issues like:

- problems with installation (the installer wouldn't boot without some cryptic kernel parameters),

- lack of a graphics driver,

- random crashes (like Google Maps causing the whole system to hang, requiring a hard reset),

- the processor not running at full speed,

- the system seeing only one logical core instead of 4 (2 cores x HT),

- the "shutdown" button causing a reboot instead of power off.

Most of those were fixed only after Ubuntu 16.04 came out at the end of April. Some issues, however, persist.

So my impression is that Linux is a good choice only if your hardware is quite old (like, say, two years, or at least one processor / graphics card generation behind).

For people like me, who want the latest and greatest hardware, Linux is not an option.


Except when it forgets that you have said hardware installed.


> I mean... Why not just ... use Linux?

Linux as a subsystem of Windows gives me a much better story as far as hardware compatibility and software than the other way around.


For certain things, I agree. For me, it is easier to dual-boot with Win 8.1 as my primary. I tried running pure Linux on my laptop, but then I had cut myself off from distributing any game I might make for Windows.

I have struggled with MSYS2 several times on Windows, and have broken the installation. Once it hits critical mass, some dependency or setting ruins my whole experience.

But then again, I've been on the wrong side of these discussions before: I was running Minix on my Amiga 500, and I was rooting for Minix over Linux back in the day. I also ran MkLinux on my PowerPC Mac, and was working on my own OS as a variant of Minix.

If you are a web dev and don't do .NET F#/C#, then sure, Linux all the way; however, I am curious to see how .NET Core plays out.


If that's what you have to tell yourself, I guess.


Windows-only tools, corporate policies that mandate Windows. Tons of reasons.


Sometimes you're targeting Linux, but your main target is Windows. Other times, you're just more comfortable primarily using software that's available on Windows, and just need a few tools from Linux every once in a while. MS has always offered collections of very powerful developer tools, and this seems like something to add to that collection.


Consider people who are developing cross-platform apps that target Windows _and_ Linux. With this thing, you can develop on Windows, immediately build for both (and so e.g. verify that you're not using any VC++-isms in your code), and test both right away. In fact, VS can even debug via gdb these days.


If it's at home, maybe for games? It sounds like if you're going from a Mac to Windows 10, you get a better Unix (closer to Linux) and more games. I might actually take them up on that for a home desktop box, if I can figure out what hardware to buy.


> people talk about Visual Studio being a great IDE

I think this is just something people regurgitate often and believe without evidence. I've used the IDE and it is terrible – probably one of the worst. Last I checked, it couldn't open multiple windows for certain file types. Its interface looks like a web IDE. It hangs while it scans your project to provide IntelliSense, and IntelliSense itself is crazy annoying. Who wants stuff popping up while you type, or stuff underlined before you've finished? I actually have a big list of annoyances somewhere around.

I really can't agree more – just use Linux, or some variant of UNIX, or use macOS if you don't want to fuss with buggy/missing drivers and piecing together your own computer from parts like it is some kind of difficult feat or accomplishment worthy of nerd respect.


It used to be good. Then WPF happened.


Yes, I also think VC6 was the best IDE. But the WPF-based shell arrived with VS2010, so most people who describe VS as the best IDE ever are actually referring to a WPF version.



I'd love to see a video of the demo session for this; really curious how ProcMon and AV tools will deal with ELF binaries in this release.


Screenshot: http://imgur.com/a/5Fcr9

Short summary:

* Ubuntu and all child apps run under the current user's credentials, but are launched under an LxssManager service host process (as opposed to explorer.exe)

* ProcMon can interact with (send SIGTERM etc. to) Unix processes; but as shown in htop, Unix processes are not aware of Windows ones, nor have I found any way to send signals backwards

* API compatibility is excellent. Over the last 2 days, I have recompiled a full Python & Elasticsearch & MySQL stack on this. Overhead is significantly lower than any of the virtualization stacks

* I don't use AV tools; however, I suspect they don't check for ELF files. The host system is mounted at /mnt/c, /mnt/d, etc. in LXSS, with current-Windows-user creds

* 2-command sandbox reset: lxrun /uninstall & lxrun /install will restore the Linux subsystem to factory defaults

* The LXSS root dir (/usr, /var, etc.) is in c:\Users\$username$\AppData\Local\lxss\rootfs\ ; home is mounted into c:\Users\$username$\AppData\Local\lxss\home\$username$\

And some caveats:

* Filewatching (specifically, inotify_add_watch ) does not (yet) work

* Manipulating files from the host occasionally makes them _disappear_ (??!) from visibility in the Linux subsystem. Specifically, git pull from bash, then git update from the host, makes git update from bash impossible (index file open failed). Same problems with other types of host file manipulation. This might be due to permissions, or something I haven't figured out yet.
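
A minimal way to reproduce the inotify caveat above, assuming gcc is available inside WSL (the /tmp path is an arbitrary choice); on builds where inotify is unimplemented, the add-watch call fails and perror tells you why:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/inotify.h>

    int main(void) {
        /* Try to establish an inotify watch on /tmp (arbitrary test path). */
        int fd = inotify_init();
        if (fd < 0) { perror("inotify_init"); return 1; }
        int wd = inotify_add_watch(fd, "/tmp", IN_CREATE | IN_MODIFY);
        if (wd < 0) { perror("inotify_add_watch"); return 1; }
        puts("watch established; inotify appears to work here");
        return 0;
    }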


Regarding file watching: some programs support polling, which, although less efficient, is usually completely adequate for development use cases.

Today the status on this issue changed from 'No status' to 'On the backlog' so it will be fixed someday. https://wpdev.uservoice.com/forums/266908-command-prompt-con...
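
For reference, the polling fallback amounts to something like this sketch (hypothetical file name; the 1-second interval is chosen arbitrarily):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "Makefile";  /* hypothetical file to watch */
        time_t last = 0;
        for (;;) {
            struct stat st;
            /* Poll the modification time instead of waiting for an event. */
            if (stat(path, &st) == 0 && st.st_mtime != last) {
                if (last != 0) printf("%s changed\n", path);
                last = st.st_mtime;
            }
            sleep(1);
        }
    }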


And there is some strange behaviour with the keys. Actually, using LXSS becomes far less powerful for users of ConEmu thanks to this. Vim and NeoVim do weird things with the arrows, in mc I can't navigate the filesystem with the cursor keys, and htop does weird things.


I suspect that might be ConEmu, and compatibility thereof. Specifically, running the bog-standard bash prompt (with conhost.exe) makes this a no-repro; all keys / arrows work in mc & htop.


Unfortunately, it appears to be a design problem on the Microsoft side: https://github.com/Microsoft/BashOnWindows/issues/111#issuec...


You can get Ubuntu bash into ConEmu with just:

-new_console bash ~


Or, with current stable (160724), install "Bash on Ubuntu On Windows" (ugh), then in ConEmu, open the Tasks section in Settings and click "Add default tasks...", then Yes.

You may want to add the tilde to the path, as it seems to open to /mnt/c/Users/<username>/. Also, I think it still messes with the keys, but not always. They seem to work fine right after starting it.


> Unix processes are not aware of Windows ones, nor have I found any way to send signals backwards

I haven't tried it out, but this project claims to offer support for doing exactly that:

https://github.com/xilun/cbwin/


| Note that outbash.exe listens on 127.0.0.1, but validates that processes that establish a connection are running as the same user as outbash.exe (this check is implemented starting with version v0.7). Therefore, it can be used on multi-user computers.

(from: https://github.com/xilun/cbwin/ )

So I suspect this uses TCP & networking to do that, which does work: you can listen on a port in LXSS, and it will be accessible via 127.0.0.1; and you can connect to 127.0.0.1 from LXSS, and it will be routed to e.g. Windows listeners. The above was specifically about process-based signals and non-network hacks.
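
A trivial loopback listener is enough to verify that routing yourself: run something like the sketch below inside LXSS, then connect from the Windows side (e.g. telnet 127.0.0.1 5555; the port number is arbitrary):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5555);                   /* arbitrary test port */
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* 127.0.0.1 only */
        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }
        listen(s, 1);
        int c = accept(s, NULL, NULL);  /* wait for the Windows-side connect */
        if (c >= 0) {
            const char msg[] = "hello from LXSS\n";
            write(c, msg, sizeof msg - 1);
            close(c);
        }
        close(s);
        return 0;
    }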


I confirm it only uses TCP for all communications between Win32 and WSL; source: I wrote cbwin. For now there is no other documented API, so this is the only sane choice. (Actually you can also use temp files on the FS, but that would be a poor idea for several reasons.)

Using TCP is actually not that much different than opening a Linux or Windows device or IPC object and using it, except that:

* you must handle serialization and framing,

* you typically get no security for free (I get some back with a GetExtendedTcpTable + OpenProcessToken + AccessCheck hack),

* performance is not excellent, especially with the security check; however, on a modern computer you can still sustain launching at a mean rate of ~40 Win32 processes/sec (and a peak rate of hundreds per sec), which is largely enough for any use case I expect. (Actually, if you do spawn Win32 processes too fast for too long, Windows tends to bug out graphically even after you stop that activity. It might be because of my graphics card driver, because I did not have that issue in a VM.)

On the plus side of using TCP, I could easily test some of the WSL-side code on a real Linux box to track down some bugs (temporarily allowing non-localhost connections in my dev environment).

To be clear, cbwin is not and will never be a complete substitute for proper IPC / interactions between Win32 and WSL, but that's not too much of an issue, because MS will very probably add some in a future release -- I think at least some way to launch Win32 programs from WSL, and at least working pipes between processes of both worlds (not 100% sure, but I would be surprised if they don't).
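
For the curious, the lookup half of that check looks roughly like the sketch below (a reconstruction, not cbwin's actual code; the token comparison via OpenProcessToken/AccessCheck is omitted). Link against iphlpapi and ws2_32:

    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <stdlib.h>

    /* Return the PID owning a TCP connection whose local port matches the
       given host-order port, or 0 if none found. Once you have the PID you
       can open its token and compare users against your own. */
    static DWORD pid_for_local_port(u_short port) {
        DWORD size = 0, pid = 0;
        /* First call just reports the required buffer size. */
        GetExtendedTcpTable(NULL, &size, FALSE, AF_INET,
                            TCP_TABLE_OWNER_PID_CONNECTIONS, 0);
        MIB_TCPTABLE_OWNER_PID *tbl = (MIB_TCPTABLE_OWNER_PID *)malloc(size);
        if (!tbl) return 0;
        if (GetExtendedTcpTable(tbl, &size, FALSE, AF_INET,
                                TCP_TABLE_OWNER_PID_CONNECTIONS, 0) == NO_ERROR) {
            for (DWORD i = 0; i < tbl->dwNumEntries; i++) {
                /* dwLocalPort is stored in network byte order. */
                if (ntohs((u_short)tbl->table[i].dwLocalPort) == port) {
                    pid = tbl->table[i].dwOwningPid;
                    break;
                }
            }
        }
        free(tbl);
        return pid;
    }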


Will they even try? WSL isn't installed by default, you have to specifically ask for it, and presumably only a small percentage of Windows 10 installs are developers who'll be doing that. I don't know if malware writers will bother with it.


You're right. You currently need to sign up for the insider program, enable it, enable ahead of time updates, and install the feature.


It's part of the AU release, and so you don't have to sign up for the Insider Program anymore.


I can't help but think this is all working towards an end which is Docker containers running on Azure's core infrastructure. Having the tooling show up earlier in Windows 10 as it's being fleshed out is just a really nice bonus.

I've been mostly using bash in windows (installed with git tooling) for a couple years now.


That is not Bash on Windows, it's a totally different thing.


I know it's not Bash on Windows... I was just making a side comment that I have already been using bash...


So far, this has been fantastic. Our team has been seriously considering converting from Windows to Linux for embedded development (we run a Linux VM when needed). We are up and running and cross-compiling for ARM without issue. Really surprised how it has just worked so far. This came at a great time for us.


Heh, if you were just about to switch to Linux, then it sounds like it came at a great time for Microsoft too ;)


So far it's been very bad for us. The always-on telemetry in Win10 raises a big question mark over whether we can depend on Windows in the future. While Windows 7 is great and around until at least 2020, an expensive move to Linux, FreeBSD or Android seems like the better option.


If you are trying to escape telemetry Linux or FreeBSD make sense, but not Android. Between the Android OS and the apps, Android collects much more data on you than Windows 10.


Just a suggestion: Consider PC-BSD alongside FreeBSD in your evaluations.


> always-on telemetry in Win10

When you install Windows 10 there are several pages of checkboxes where you can disable this telemetry. Of course we can never be sure that it is actually disabled.


That doesn't seem related to the WSL at all.


Obligatory: Maybe we finally have the year of Linux on the Desktop?


Direct link to the slides for the impatient:

https://rawgit.com/ionescu007/lxss/master/The%20Linux%20kern...



Anyone know if it's possible to somehow port the Linux subsystem to Windows 7 and 8?


No. The Linux subsystem is dependent on new kernel features introduced in Windows 10 (pico processes, which were introduced with 8.1 and improved in 10, improved fork support, file system case sensitivity improvements, memory management changes, etc.).


How has the fork support improved?


Not likely. With Windows 8 Pro, the latest Docker for Windows will probably be the closest thing to transparent integration. I tend to run a full Ubuntu Server VM via SSH, in addition to local Samba shares... I edit in a GUI in Windows and run everything under Linux. YMMV, but it's worked well enough for me.

Now that docker has better integration, I may well start using that more often.


If you look at the slides, it requires considerable kernel support. So that's very much a no.


I am kinda hesitant to download and open a PDF file coming out of Black Hat 2016.

EDIT: See child comment, GitHub preview is fantastic! Slides have a ton of great info.


GitHub will actually render a full preview on the website, no download required.

https://github.com/ionescu007/lxss/blob/master/The%20Linux%2...


GitHub uses pdf.js for that, so it's neither more nor less secure than viewing it with Firefox's built-in viewer.


There have been serious security issues with the integration of pdf.js into Firefox in the past, so it may actually be more secure than Firefox's built-in PDF viewer.


Ah, that's quite pleasant. And it doesn't even appear to use the browser's PDF renderer or anything.


Guys, how do you think Black Hat 2017 is going to be funded if you don't download and open the PDF? ;)


If they're not able to make it silently download and execute remotely without me interacting or being aware, I feel they're not doing their jobs as black hats.


The people who know how to do that are pulling down the big bucks on the Windows 10 team at Microsoft.


Well, it uses Firefox's PDF renderer - they just copied the code into the page itself.



I try to use "copy" in an inflationary way, to remove the negative stigma associated with it.

Our entire floss community is based on forking an existing project, improving, and sometimes merging again, and the entire history of innovation has been based on this copy-transform-combine cycle, too.


It says "This file is too big to show. Sorry!"

Edit: Nevermind, switch to desktop version and it works.


It's not "coming out of Black Hat". It's coming from Alex Ionescu, via his GitHub account. You have an identifiable person with a known reputation here, not some unidentifiable person hiding behind a pseudonym.

The problems that you have are quite different ones:

* Demonstrating that the Alex Ionescu of http://www.alex-ionescu.com/ and of https://microsoftpressstore.com/authors/bio.aspx?a=07cda0ad-... is the Alex Ionescu of https://twitter.com/aionescu/status/710477975288827904 and of https://github.com/ionescu007 . It's difficult to show a connection in that direction. The Alex Ionescu of https://alexionescu.net/#contact lets people connect the dots — a different set of dots, mind you.

* Knowing that https://github.com/ is the real GitHub. If this is a problem for you, then you have more serious and urgent problems than viewing a PDF document. (-:


You shouldn't be. The speakers are paid, the conference concentrates on professionals, and it would be extremely bad form for a presenter to hack participants. The people who run these conferences would take such a threat extremely seriously and you can bet law enforcement would be notified and very well informed.

To the point that I would trust a PDF from BlackHat _more_ than I would trust one from any other scientific or professional conference.


Is there any more information on this custom init they are running?


How do we get Linux?



Excellent preso. Thank you.


I must complain that the slides are chock-full of text with very few visuals. It becomes a paper reading instead of a talk at this point.


I wonder how much further humankind would have progressed by now if "because it can be done" were not such a big motivation for intelligent people.

Edit: Since everybody took this post seriously, I will, too: We would be closer to some local maxima, but would have no chance to progress beyond them.


I would personally wager humankind would have progressed quite a bit less.


The biggest motivation for me is when people tell me that it can't be done.


Not a motivator for me, but not a deterrent. Some of my best accomplishments were regarded as less than possible by the majority of other operators.

Sometimes the intersection of "because it can be done" & "the most that can be done with the resources within reach" can be applied in the most problem-solving of ways.

Haven't tried the Linux in W10 yet, but I have at least gotten DOS to boot to bare metal on a modern PC from a GPT-structured HDD reliably, with less monkey business than ever.


From fire and agriculture to airplanes and the moon landing, a huge portion of human development and the entirety of fundamental science wouldn't have happened at all if they had demanded some preconceived idea of immediate utility. So, I like to reserve this for rare occasions, but straight up fuck you for trying to force your own ideas about usefulness on how other people spend their productive time.


The only way it could possibly result in progressing more would be if perfect foresight were a thing. A lot of what we take for granted today was found/invented/etc. while looking at something else and finding an unexpected result.


They're doing this for entertainment purposes. You could burn out if you're not having fun while working. Perhaps one would just do dumber things, not ones that are more useful?


I wonder how much further humankind would have progressed by now if we knew upfront what's useful and didn't have to try a whole bunch of crazy stuff.


Agreed. And so for Windows I've had to deal with the pain of Cygwin and PuTTY and their brethren, which adds to the list of me trying crazy stuff when certain commands won't work. Then I remember "doh, it's Windows".





