As someone who worked in the systems division throughout the 90s, I've often found Raymond's history a bit odd.
"Windows 95" was originally "Windows 93" and explicitly wasn't going to have a new user interface. They wanted to maximize revenue and not force people to re-train. It slipped and slipped and slipped...
The UI was actually prototyped and built not on Windows NT, but Cairo -- the thing that was supposed to come after Windows NT 3.1. It also slipped and slipped and slipped...
Paul Maritz (now of VMware) thought it was a great idea to have lots of teams, under competing vice presidents, build overlapping systems technology that needed to remain compatible at the legendary Windows level.
So. Stupid.
It basically took until Windows 7 to get the mess he created straightened out.
The key shuffle had the Cairo UI developers -- but not the designers -- move from under VP Jim Allchin to VP Brad Silverberg's group. Design was managed by Joe Belfiore, a very junior guy with no design or UI design experience. Nobody wanted the job (interface was not respected then) and he manned up and took it. He's now VP over Windows Phone.
But this engineering team was sort of -- uh -- let's just say Allchin didn't really fight to keep them. The types of guys who never quite realize that width is (x2 - x1 PLUS 1), resulting in everything from blurry icons to blurry images/faxes for a decade. Who then refuse to fix the bugs because they "don't see it and it would add risk to our ship date."
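(To make that off-by-one concrete: a hypothetical C sketch, with made-up names, assuming inclusive pixel coordinates. Forget the + 1 and every computed width loses a pixel, and anything scaled or resampled from that width comes out slightly blurred.)

    #include <stdio.h>

    /* Illustrative only: a span from column 0 through column 15 covers 16
       pixels, so the width of an inclusive rectangle is x2 - x1 + 1. */
    struct rect { int x1, y1, x2, y2; };            /* inclusive corners */

    static int width_wrong(struct rect r) { return r.x2 - r.x1; }     /* drops a pixel */
    static int width_right(struct rect r) { return r.x2 - r.x1 + 1; } /* x2 - x1 PLUS 1 */

    int main(void)
    {
        struct rect icon = { 0, 0, 15, 15 };        /* a 16x16 icon */
        printf("wrong: %d  right: %d\n", width_wrong(icon), width_right(icon));
        return 0;
    }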
After Windows 95, these devs went on to build the first IE. Yet again in a mad "must ship" panic, again led by people with no visual or design experience, again under Silverberg. So that's where all the box model madness and "things just don't line up" nonsense started.
>They wanted to maximize revenue and not force people to re-train. It slipped and slipped and slipped...
And don't forget the 80386 was released in 1985. In fact, MS had the early OS/2 2.0 SDK betas from around 1990, which continued until MS broke the JDA in mid-1991. And while Chicago was slipping, look up "OS/2 Microsoft Munchkins" for some of the unethical tactics MS used to attack OS/2 after the JDA was broken. And I haven't even mentioned DR-DOS yet.
In fact, MS screwed up the protected mode transition so badly that I could write an entire article about this history, starting with Intel announcing the 80286 in 1982 while MS ignored it and made the mistake of wasting years writing a real mode multitasking DOS. Then IBM refused to license it, which led to the JDA in 1985, which resulted in the 286 protected mode based OS/2 1.0 being released in 1988, which is what should have been finished by 1985 in the first place. The problem was that the 386 was introduced by Intel in 1985, and I could go on about Windows/386 and Windows 3.0 etc., including the MS OS/2 2.0 SDK betas mentioned above...
Interesting how the developers of Windows 95 were forced to use relatively low-end computers.
The hardware & bandwidth of your average HN reader is likely much better than your users' setups.
Does anyone here keep low end computers on hand for testing? With the proliferation of client-side Javascript processing, hardware is becoming more important. What about testing over dial-up or slow DSL?
Anyone know of an easy way to simulate a slow bandwidth connection? Perhaps tether your MacBook to your iPhone and turn off 3G, to test at EDGE speed.
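(One repeatable option, if there's a Linux box you can route traffic through, is to shape the interface with tc/netem. A rough sketch only, run as root; eth0 and the numbers are placeholders for whatever link you want to imitate.)

    # add ~300ms of jittery delay, then cap throughput around 256kbit/s
    tc qdisc add dev eth0 root handle 1:0 netem delay 300ms 50ms
    tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000
    # remove the shaping when you're done
    tc qdisc del dev eth0 root

On a Mac, the Network Link Conditioner preference pane that ships with Xcode's hardware tools (or dummynet via ipfw) can do similar shaping locally.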
Lowest I go is testing on IE8 in a VM on my MacBook Pro. That's probably not good enough (or... bad enough, technically).
I'd like to see a discussion about this topic. Testing with low end hardware & bandwidth, etc.
I learned this lesson many years ago, back in the early/mid 90s. There was a well-known American developer named Tran (Thomas Pytel) who wrote some mind-blowing stuff back then: very proper hacky things in bizarre screen modes using more colors than should have been possible, sound hardware driven in crazy ways to produce more channels than anybody thought possible, and all of it running faster than should have been possible on the hardware of the day. Incredibly optimized stuff for the early 90s.
My friends and I would always try to buy the best hardware available to run this stuff as quickly as possible. The first person with a 486DX2-66 pretty much ended up with their system on display at meetups, running all the old demos.
What did Tran have? A 386sx16 if I recall. A machine that was so slow, so last gen, that he was probably unable to even watch most of the demoscene releases available at the time. No math co-processor, 16MHz, and yet he managed to write an amazing amount of highly influential and important code used far outside of the demoscene (most notably PMODE http://en.wikipedia.org/wiki/PMODE which was used in many game productions during that era).
I remember him crediting the fact that his home machine was so slow that it forced him to optimize the hell out of his code to make it run decently, and the side effect was that people with faster machines had their minds blown. I personally think that Tran, and his works, should be mandatory study for any developers, as an inspiration for what they can do with minimal hardware and the importance of optimization.
Assuming I'm in a location with reasonable signal, I tether my netbook to my mobile phone and use that to simulate the unholy trinity of relatively slow CPU, small screen, and slow high-latency connection.
The data rate I see varies wildly, but on average is somewhere close to first generation ADSL links over here (512kbit/sec) - I've seen 3mbit/sec on HSDPA but only in speedtest apps, and even then only in ideal conditions (strong signal, silly hour of the morning so relatively little contention for the cell's resources, ...). If I want to test worse conditions than that, I tell the phone to force itself to GPRS/EDGE only (there doesn't seem to be a GPRS-only option short of breaking out the old spare phone that doesn't support EDGE).
Of course I can use the phone directly if I want to test against a much smaller screen and an even slower CPU.
One previous employer had a corporate policy that forbade keeping low-end computers for testing. The closest we could come was to test the desktop software in a virtual machine and starve the VM of memory. When we had to move to .NET 3.5 for some feature, there were about a dozen customers we had to refund because they were still using Win2k as late as 2009. This software was primarily used by accountants and actuaries, an audience well known for being technology laggards.
> Does anyone here keep low end computers on hand for testing?
My main work desktop is a 2005 vintage machine (Athlon64 3000+, 1GB RAM, Geforce MX4000) and it's still quite sufficient as a dev machine. Of course, I have a whole bunch of Opteron servers to run CPU-heavy stuff when necessary.
I always thought it was related to MSDOS compatibility.
If I recall correctly, cli, sti, in, out, ... are privileged operations in WinNT, but not in Win95. Why? Because these instructions can't work in virtual-8086 mode and need to be handled by the OS, and that was really expensive on early 32-bit machines. Win95 needed legacy DOS programs to run with reasonable performance, so the solution was to ignore the problem and allow the privileged operations... and I can't provide more details because I don't remember them :)
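(A minimal way to poke at the NT half of that claim -- a sketch only, assuming 32-bit MSVC where inline __asm is available: execute cli from user mode and let structured exception handling catch the privileged-instruction fault. On a system that lets ring-3 code touch the interrupt flag, the first message prints instead.)

    #include <stdio.h>
    #include <windows.h>

    /* Sketch only: on NT-family Windows, CLI from user mode raises
       EXCEPTION_PRIV_INSTRUCTION, which the SEH filter below catches. */
    int main(void)
    {
        __try {
            __asm { cli }   /* try to clear the interrupt flag from ring 3 */
            puts("cli executed without faulting");
        }
        __except (GetExceptionCode() == EXCEPTION_PRIV_INSTRUCTION
                      ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
            puts("cli trapped as a privileged instruction");
        }
        return 0;
    }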
Yea, Intel VME, introduced in the Pentium and SL-enhanced 486 processors, fixed the problem by allowing the processor to maintain the virtual interrupt flag.
"(When Windows 95 shipped, my primary machine was a 486/DX50 with 8MB of RAM. My test machine was a 386 with 4MB of RAM. The combined computing power and storage capacity of all the machines in my office is now exceeded by your cell phone.)"
This makes me wonder why NT's minimum requirements were six times those of Windows 3.1. Both 3.1 and NT 3.1 had essentially the same front-end, to my knowledge, so was NT's back-end just that vastly more inefficient than DOS?
Vastly more advanced. The NT kernel was designed by Dave Cutler (of VMS fame). When work on NT started, it was simply beyond what PCs could do: the design included every high-profile buzzword from the minicomputer world, as well as portability across CPU architectures and a hardware abstraction layer. (Originally NT was supposed to be OS/2 3.0.)
NT was one of Microsoft's most brilliant long-term investments. It delivered the true Unix competitor they needed in the '90s, and the kernel is still there in Windows 8 with no urgent need to replace it.
If you ever have a chance to read the book "Showstopper" by G. Pascal Zachary (and haven't already), do it. Awesome book about Cutler and the creation of Windows NT.
Raymond also glosses over this a bit in his article. Microsoft wasn't selling Windows 95. Microsoft was selling an upgrade product to corporations who ran Office.
There were few (if any) 12MB machines; it was mostly 8MB or 16MB. (If you had 4MB, Windows 95 would boot and you could play Minesweeper.)
The target market had 8MB systems. Which needed to run Windows 95 + Office 95 flawlessly. 4MB was a lie and a fake goal -- which made it all the way to the box -- that made sure an 8MB system could run Office.
User Interface and Plug and Play were the risk items. I personally thought giving the critical path guys shitty dev machines was shortsighted -- we had plenty of ways to make sure the overall system met performance goals.
NT wasn't that much bigger. It had the Win16 subsystem (WoW), so there were basically two copies of Windows running at all times. Still, NT fit comfortably in 8MB and ran every version of Office (old and new) well in 12MB.
TL;DR: arguments over 4MB and one cycle of upgrade revenue stalled Windows and IE development for 15+ years.
NT is far more advanced and feature-rich than Win31/Win95/DOS. Its internals are quite elegant and powerful. Its minimally modified descendants (XP, Vista, Win7, Win8) run to this day and are still going strong.
"Windows 95" was originally "Windows 93" and explicitly wasn't going to have a new user interface. They wanted to maximize revenue and not force people to re-train. It slipped and slipped and slipped...
The UI was actually prototyped and built not on Windows NT, but Cairo -- the thing that was supposed to come after Windows NT 3.1. It also slipped and slipped and slipped...
Paul Maritz (now of VMware) thought it was a great idea to have lots of teams, under competing vice presidents, build overlapping systems technology that needed to remain compatible at the legendary Windows level.
So. Stupid.
It basically took until Windows 7 to get the mess he created straightened out.
The key shuffle had the Cairo UI developers -- but not the designers -- move from under VP Jim Allchin to VP Brad Silverberg's group. Design was managed by Joe Belfiore, a very junior guy with no design or UI design experience. Nobody wanted the job (interface was not respected then) and he manned up and took it. He's now VP over Windows Phone.
But this engineering team was sort of -- uh -- let's just say Allchin didn't really fight to keep them. The types of guys who never quite realize width is (x2 - x1 PLUS 1) resulting in everything from blurry icons to blurry images/faxes for a decade. Who then refuse to fix the bugs because they "don't see it and it would add risk to our ship date."
After Windows 95, these devs went on to build the first IE. Yet again in a mad "must ship" panic, again led by people with no visual or design experience, again under Silverberg. So that's where all the box model madness and "things just don't line up" nonsense started.