Hacker News

> the whole idea hinges on the compiler being able to figure out the correct instruction schedule ahead of time. While feasible for Intel's/HP's in-house compiler team, the authors of other toolchains largely did not bother, instead opting for more conventional code generation that did not perform all too well.

I definitely think that keeping their compilers behind an expensive license was a somewhat legendary bit of self-sabotage, but I'm not sure it would have helped even if they'd given them away or merged everything into GCC. I worked for a commercial software vendor at the time, before moving into web development, and it seemed like they basically over-focused on HPC benchmarks and a handful of other things like encryption. All of the business code we tried usually ran slower even before you considered price, and nobody wanted to spend time hand-tuning it in the hope of evening out the performance. I do sometimes wonder whether Intel's compiler team could make it more competitive now, with LLVM, WASM, etc. making the general problem of optimizing everything more realistic, but I think the areas where the concept works best are increasingly sewn up by GPUs.

Your comment about DEC was spot-on. A lot of people I met had memories of that era and were not keen on locking themselves in. The company I worked for maintained a pretty large support matrix because our customers ran most of the "open systems" platforms precisely so they could switch easily if one vendor got greedy.


