Hacker News

I actually have a short book on the Itanic/Itanium finished and had planned to have it released as a free download by now, but various schedule-related things got in the way and it just hasn't happened yet.

I was a mostly hardware-focused industry analyst during Itanium's heyday so I find the topic really interesting. From a technical perspective, compilers (and dependency on them) certainly played a role but there were a bunch of other lessons too around market timing, partner strategies, fighting the last war, etc.



I worked on Merced post-silicon and McKinley pre-silicon. I wasn't an architect on the project; I just worked on keeping the power grid alive and thermals under control. It reminded me of working on the 486: the team was small and engaged, even though HP was problematic for parts of it. Pentium Pro was sucking up all the marketing air, so we were kind of left alone to do our own thing since the part wasn't making money yet.

This was also during the corporate-wide transition to Linux, replacing AIX/SunOS/HP-UX. I had a Merced in my office, but sadly it was running Linux in 32-bit compatibility mode, which is where we spent a lot of time fixing bugs because we knew lots of people weren't going to port to IA64 right away, and that ate up a ton of debug resources. The world was still migrating to Windows NT 3.5 and Windows 95, so migrating to 64-bit was way too soon. I don't remember when the Linux kernel was finally ported to IA64, but it seemed odd to have a platform without an OS (or with an OS running in 32-bit mode). We had plenty of emulators; there's no reason pre-silicon kernel development couldn't have happened faster (which is what HP was supposed to be doing).

Kind of a bummer, but it was a fun time, before the race to 1 GHz became the next $$$ sink / pissing contest.


I was at HP pre-Merced tape-out and HP did have a number of simulators available. I worked on a compiler-related team so we were downstream.

As for running Linux in 32-bit compatibility mode, wasn't that the worst of all worlds on Merced? When I was there, which was pre-Merced tape-out, a tiny bit of the chip was devoted to the IVE (Intel Value Engine), which the docs said was supposed to be just good enough to boot the firmware and then jump into IA64 mode. I figured at the time that this was the goal: boot in 32-bit x86, then jump to 64-bit mode.


> wasn't that the worst of all worlds on Merced?

Yes, yes it was! It ended up playing a much larger role in marketing the transition than it should have. But the Catch-22 has been analyzed to death.


Do it, do it, do it!


I will, but I want to use it as part of a website relaunch and, for various reasons, the timing of that relaunch has slipped.


Curious question on the period.

Assuming Itanium released as it actually did... (timeline, performance, compiler support, etc.)

What else would have had to change for it to get market adoption and come out on top? (Competitors, the x86 clock rate running into a ceiling sooner, etc.)


Well, what actually killed it historically was AMD64, and AMD64 could easily not have happened; AMD has a very inconsistent track record. Other contemporary CPUs like Alpha were never serious competitors for mainstream computing, and ARM was nowhere near being a contender yet. In that scenario, mainstream PC users would obviously have stuck with x86-32 for much longer than they actually did, but I think in the end they would have had no real choice but to be dragged kicking and screaming to Itanium.


PowerPC is the one I’d have bet on - Apple provided baseline volume, IBM’s fabs were competitive enough to be viable, and Windows NT had support. If you had the same Itanium stumble without the unexpectedly-strong x86 options, it’s not hard to imagine that having gotten traction. One other what-if game is asking what would’ve happened if Rick Belluzzo had either not been swayed by the Itanium/Windows pitch or been less effective advocating for it: he took PA-RISC and MIPS out, and really helped boost the idea that the combination was inevitable.

I also wouldn’t have ruled out Alpha. That’s another what-if scenario but they had 2-3 times Intel’s top performance and a clean 64-bit system a decade earlier. The main barrier was the staggering managerial incompetence at DEC: it was almost impossible to buy one unless you were a large existing customer. If they’d had a single competent executive, they could have been far more competitive.


> PowerPC is the one I’d have bet on

Interesting to note that all of the state-of-the-art video game consoles of the era (Xbox 360, PS3, and Wii) used PowerPC CPUs (in the preceding generation, the Xbox used a Pentium III, the PS2 used MIPS, and the GameCube was already PPC).


Power.org [1] was a fairly serious initiative to push Power for consoles and the like at one point.

[1] https://en.wikipedia.org/wiki/Power.org


No, it could not have not happened.

Address space pressure was immense back in the day, and simply doubling the width of everything while retaining compatibility was the obvious choice.


> Address space pressure was immense back in the day, and simply doubling the width of everything while retaining compatibility was the obvious choice.

PAE (https://en.wikipedia.org/wiki/Physical_Address_Extension) existed for quite some time to enable x86-32 processors to access more than 4 GiB of RAM. Thus, I would argue that an OS facility to move allocated pages in and out of a process's 32-bit address space, letting the process use more than 4 GiB of memory, would have been a much more obvious choice.
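
In case it helps, here is a minimal sketch of that windowing idea, assuming a 32-bit Linux process built with -D_FILE_OFFSET_BITS=64 so that 64-bit file offsets work; the temp-file backing store, the 8 GiB size, and the 256 MiB window are illustrative assumptions, not anyone's real design (Windows AWE and older bank-switching schemes did something similar in spirit):

    /* Minimal sketch of the "window into a bigger-than-4-GiB object" idea,
     * assuming a 32-bit Linux process built with -D_FILE_OFFSET_BITS=64 so
     * that off_t is 64 bits. The temp file, 8 GiB size, and 256 MiB window
     * are made-up illustration values. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BACKING_SIZE (8ULL << 30)   /* 8 GiB backing store              */
    #define WINDOW_SIZE  (256UL << 20)  /* 256 MiB mapped at any one moment */

    int main(void)
    {
        char path[] = "/tmp/bigbuf-XXXXXX";
        int fd = mkstemp(path);
        if (fd < 0) { perror("mkstemp"); return 1; }
        unlink(path);                       /* keep the backing file anonymous */

        if (ftruncate(fd, BACKING_SIZE) != 0) { perror("ftruncate"); return 1; }

        /* Touch one byte in each 256 MiB slice of the 8 GiB object while only
         * ever keeping a single window inside the 32-bit address space. */
        for (uint64_t off = 0; off < BACKING_SIZE; off += WINDOW_SIZE) {
            char *win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, (off_t)off);
            if (win == MAP_FAILED) { perror("mmap"); return 1; }
            win[0] = (char)(off >> 28);     /* use the window    */
            munmap(win, WINDOW_SIZE);       /* slide it back out */
        }

        close(fd);
        printf("walked %llu GiB through one 256 MiB window\n",
               (unsigned long long)(BACKING_SIZE >> 30));
        return 0;
    }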


> Thus, I would argue that an OS facility to move allocated pages in and out of a process's 32-bit address space, letting the process use more than 4 GiB of memory ...

Oh, no. Back then the segmented memory model was still remembered and no one wanted a return to that. PAE wasn't seen as anything but a bandaid.

Everyone wanted big flat address space. And we got it. Because it was the obvious choice, and the silicon could support it, Intel or no.


PAE got some use - for that “each process gets 4GB” model you mentioned in Darwin and Linux - but it was slower and didn’t allow individual processes to easily use more than 2-3GB in practice.


> AMD has a very inconsistent track record

In what way? Their track record is actually pretty consistent, which is partially what led to them fumbling the Athlon lead (along with Intel's shady business practices).

During the AMD64 days, AMD was pretty reliable with their technical advancements.


Yes, but AMD was only able to push AMD64 as an Itanium alternative for servers because they were having something of a renaissance with Opteron (2003 launch). In 2000/2001, AMD was absolutely not seen as something any serious server maker would choose over Intel.


You're right, there were ebbs and flows in their influence... but they were consistent within those trends. An extension released during one of their strong periods was almost certain to be picked up, especially if Intel wasn't offering an alternative (and Itanium wasn't considered one, since it was server-only).


Apple was fine on POWER


My uninformed opinion: lots of speculative execution is good for single core performance, but terrible for power efficiency.

Have data centres always been limited by power/cooling costs, or did that only become a major consideration during the move to more commodity hardware?


Seeing the direction Intel is going with heterogeneous compute (P vs. E cores) and its patent to replace hyperthreading with the concept of "rentable" units, it seems that exposing the innards of the CPU (the thread director) and making it more flexible to OS control would let the OS use better algorithms to decide where, when, and for how long work runs.
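
For what it's worth, a minimal sketch of the OS-side half of that, using Linux's sched_setaffinity to steer a thread onto the performance cores; treating CPUs 0-7 as the P-cores is purely an assumption, and real code would read the topology from /sys/devices/system/cpu instead of hard-coding it:

    /* Rough sketch, assuming Linux and glibc's CPU_* macros, of steering the
     * calling thread onto the performance cores. The P-core IDs (0-7) are a
     * made-up assumption for illustration. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t pcores;
        CPU_ZERO(&pcores);
        for (int cpu = 0; cpu < 8; cpu++)    /* assumed P-core IDs 0..7 */
            CPU_SET(cpu, &pcores);

        /* Pin the calling thread to the assumed P-core set. */
        if (sched_setaffinity(0, sizeof(pcores), &pcores) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to %d assumed P-cores\n", CPU_COUNT(&pcores));
        return 0;
    }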



