Hacker News

I don't think most software can scale to 10 GPUs out of the box. AFAIK, it would be hard to even find a motherboard that would fit them. However, a company could conceivably buy a workstation with 256 GB RAM, two 32-core Threadripper CPUs, and four Nvidia 2080 Ti cards. That would definitely put you a few years ahead of the average "consumer PC" or next-generation console.

Sidenote: I've read that John Carmack and id Software liked to develop on workstations that were "ahead of the curve" in that way. It gave them an edge: they could develop future games for hardware that didn't yet exist, knowing that consumer PCs would eventually catch up.

I think what made these SGI computers really amazing is that accelerated 3D graphics simply didn't exist in the consumer market at the time (nor did much real-time 3D, for that matter). They also had a cool Unix operating system with a UI that was way ahead of anything you could get on a consumer PC. I can also imagine it was a much more comfortable development environment than, say, MS-DOS, which didn't even have multitasking.



That's a good point, but it undersells things. My favorite thing about them was the single system image, which removed bottlenecks and complexity for developers at the same time. PCs were using slow or redundant buses to connect high-speed components. SGI removed the redundancies, used fast interconnects (GB/s), made them low-latency (microseconds vs. milliseconds), and NUMA-enabled them. That last part meant sending data to other nodes didn't take middleware like MPI: you could just issue load and store instructions, as on a single node. The hardware took care of communication via cache coherency. You did have to design for good locality to minimize moving data across nodes, though.

Add the other features like reliability (esp. hot-swapping), serviceability, and security (Trusted IRIX), and you had some incredible machines. I always wanted inexpensive hardware with hot-swap, RAID, and something like NUMAlink connecting it. Never quite got that. One company did make a NUMA system for AMD and Intel:

https://www.numascale.com/Scale-up-products/


I guess that's the difference between a workstation, which is designed for performance and versatility before cost, and a PC, which is made to be affordable first. When the PC industry started, it was very much about repurposing whatever low-cost CPUs and off-the-shelf components were available and finding ways to package them into a semi-usable machine for less than $1000. Things have changed quite a bit since, but many of the do-it-cheap, rushed-to-market compromises are still with us.


Sure, the PCs that started the PC industry — things like the Apple I and the MITS Altair — were indeed "about repurposing whatever low-cost CPUs and off-the-shelf components were available, and finding ways of packaging this into a semi-usable machine for less than $1000." But, long before 1993, most CPUs and components used in PCs were being produced specifically for PCs, with uses in lab equipment, industrial control, and workstations being a rather smaller secondary market.



