SPARCbook 3000ST (triosdevelopers.com)
200 points by Breadmaker on March 29, 2019 | 98 comments



Fun. I was one of the software developers on the SPARCbook series (prior to the acquisition), as well as the Alpha machine for Digital, the P1000/P1300 (Pentium), and IBM (PowerPC).

Great memories of a small team taking on huge challenges.


I ran Alpha servers back then alongside a bevy of DG/UX AViiON and CLARiiON servers. The DECs always outperformed the DG/UX boxes.


I never used an Alpha, but somehow I fostered such an inflated impression of their performance that if someone told me today that their new i9 wunderbeast was almost as fast, it would take me a little while to stop believing them.


Well, a lot of the Alpha engineers ended up working on and helping design AMD's AMD64 extensions and HyperTransport bus. After AMD adopted the Alpha's EV6 bus for the Athlon, you could say Alpha technology and engineers had a huge effect on why AMD bested Intel for quite a while back in the day.

If it weren't for the Digital and AMD engineers, we'd all be suffering with IA-64, etc... though actually, I think it's more likely PPC would have remained a force to be reckoned with in the desktop space.


Hey that’s some cool background info, thank you!!


The Alpha memory model has always scared the bejesus out of me. I bet it makes things fast though.


Fast and stable. Before I left that company, one box had an uptime of over 14 months.


That's awesome! My grandpa had an Alpha desktop back in the '90s and I loved that thing. It was my first non-Apple, non-MS-DOS experience, so I have a soft spot, and I haven't heard anyone mention those in years.


Starting in the mid-1990s you could find amazing deals on SPARC and Alpha “desktop” motherboards. They were both great architectures in many ways, interesting platforms for hacking, and sometimes much faster than Intel boxes of the time for certain workloads.

I think the brief affordability of relatively recent vintage non-Intel hardware was a combination of the sudden growth of Internet 1.0 funding fierce competition between hardware companies (Sun was taking Java to the bank), plus Microsoft and IBM trying hard to get out from under Intel's desktop hardware monopoly.

I miss those days! It felt like CPU hardware was this amazing frontier that just kept expanding every day. On the other hand, now you can buy a $10 embedded ARM board that’s faster than a high end AlphaStation from the late 1990s, so that’s definitely just 100% pure nostalgia for my misspent youth.

When I got to the Media Lab as a 21 year old grad student in 1996 I had never actually physically seen a Sun or DEC box. Lots of my new colleagues had worked at tech companies between undergrad and grad school, so they sometimes had old workstations and terminals in their apartments. I remember seeing a headless SPARCstation 10 under a coffee table (presumably running mail, DNS, and Usenet for a few folks) and being blown away. I mean, you could put 512 MB of RAM in that thing.


I remember the Alphas in the computer lab at school. They seemed nice enough, but they didn't seem that much faster than the Pentium chip in my personal computer and put out a lot of heat. Given their price vs. a Pentium 100 with FreeBSD, it really was the end of days for the Unix workstation.


The Pentium was really the Amiga of its time - a remarkable technological achievement and the first x86 CPU that could compete with workstation RISC CPUs, which was far from trivial.

If the 486 was the final nail in the m68k's coffin, the Pentium was the same for the high-end RISCs.


It was actually reasonably easy to assemble one yourself. A company I worked for bought a bunch of motherboards with 21164 (EV56) CPUs. I took one home and was surprised to find that, at the time, it was reasonably easy to get it into an ATX case and install RAM, a CPU, and Debian on it.

I have fond memories of building GNOME 1.x from source on it. It felt really fast compared to the Pentium IIs and IIIs of the day.


Any stories come to mind to share?


Many.

The initial PowerPC port was done by three people: Jes Wills, Andy Giles and Tony Sumpter. When they traveled to Austin, TX to present the work, the AIX software manager who had been in charge of the ThinkPad Woodfield (6020) effort wanted to know where the rest of the team was. Her team was over 40 people, and they'd taken over 18 months to get to a port.

The Tadpole team for the initial port was three people, and they had taken six months.

---

There was a ritual on the engineering team that you were to drink a "gallon of Abbot" on your 30th birthday. The gallon, of course, was an Imperial gallon (8 twenty-ounce pints, or 4.54 litres). The Abbot was Greene King's Abbot Ale, which is 5% ABV.

You have to start early to get it done, and it still wrecks you.

---

The US Navy was very upset that the Tadpole case was made of AZ91D magnesium alloy. They wouldn't let it on board anything, because Mg + 2H2O → Mg(OH)2 + H2.

Tadpole used it because it has a density (1.81 g/cm^3, or 1.046 oz/in^3) similar to that of an ABS/PC blend with 20% glass fiber (1.25 g/cm^3), and it has high heat conductivity, so we used the case as a heat sink. AZ91D has an ignition temperature of approximately 468°C (875°F), which is difficult to reach and maintain due to magnesium's high heat conductivity.
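
(Double-checking that oz/in^3 conversion; this is my arithmetic, not Tadpole's spec sheet:)

    $1.81\,\mathrm{g/cm^3} \times 16.387\,\mathrm{cm^3/in^3} \div 28.35\,\mathrm{g/oz} \approx 1.046\,\mathrm{oz/in^3}$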

---

A magazine (I forget which one) wanted to do a story on how the laptop would stop bullets. So we took a couple dead systems to the local gun range (Red's Indoor Range in Austin, TX) and tried it out. It wouldn't stop a .22, but that didn't stop us from blowing the hell out of the rest with larger calibre firearms. (Several of us were good friends with the people at Red's back then.) Whenever the Brits came over, they wanted to go to Red's, and we were all too willing to take them. The engineering area in Cambridge (England) was decorated with the silhouette targets they took back to show their efforts.

---

George Grey (CEO) and Bob Booth (CFO) went on to found and run GeoFox, which made a Psion 5 work-alike (http://www.ericlindsay.com/epoc/geofox.htm). It didn't work out, so George became the President of Psion USA, and then the President and CEO of SavaJe, which became JavaFX Mobile after it was acquired by Sun. SavaJe apparently inspired Android. https://www.bizjournals.com/boston/blog/mass-high-tech/2010/...

George and Bob were most recently at Linaro, and are now at foundries.io, both in the positions they had at Tadpole.


Thanks!


Tadpole did the N40, I'm pretty sure, and then IBM did the 800-series themselves, right?


I think that's right.


IIRC, yes.


I used a SPARCbook back in 1996 for a business trip to Germany to install our search engine at t-online.de. The CFO of the company wanted to know who had this machine at all times because it was so expensive, around $20k USD. It didn't have any usable battery life; if you tried to do any type of development, it would last about 20 minutes before it shut down. The many disk partitions made us get creative to fill the 1.2GB disk with data. We had to put files all over the place and use softlinks to get our software to work. This odd setup led a sysadmin to "clean things up" one afternoon. He deleted most of the OS in the process. He was really mad because he had to stay up all night to do a reinstall.

Wow, I forgot about that machine until this post.


The initial SPARCbook had a 20MB drive, and we had to squeeze SunOS 4 onto it.


> "A whopping 128MB of RAM (in 1997, this unimaginable in a laptop)" [sic]

Really? The PowerBook in 1997 supported up to 160 MB: https://en.wikipedia.org/wiki/PowerBook_G3#Models

I don't think this was particularly unusual. HP's laptop of the day supported up to 160 MB, as well: http://www.computinghistory.org.uk/det/37569/HP-OmniBook-570...


At the time I remember being amazed that the company I worked for paid out $$$ to get some SGI machines up to ~160MB of RAM. This was because the software we used had a memory leak of some sort and needed such an extravagantly 'wasteful' amount of memory.

Having the sockets to support up to 160MB was one thing; having long enough arms and deep enough pockets to pay for it was something else. Hence 'unimaginable'.

Nowadays you get a couple of sockets in your laptop; back then, whether in a laptop or something else, a workstation would have lots and lots of sockets for memory, usually with very few of them filled.


I was a student at the time, and all the computer science researchers I knew were maxing out their laptops with 128 MB.

In the June 1998 MacWorld, you could order a 128 MB upgrade for your brand new PowerBook G3 for $389 [1]. Not cheap, but not that extravagant.

[1] https://archive.org/details/MacWorld_9806_June_1998/page/n13...


Ah, but that was for a Mac. In the days before the internet as we know it, you didn't have commodity RAM for workstations. You had to pay extra or risk voiding your warranty. Plus someone else was paying, so why run the risk?

Sure, the chips were commodity, but the modules would have a different pinout. They would be advertised as better.

Another problem was that the more memory you bought, the more it cost per megabyte. A set of chips to fill the sockets with 128MB would cost three times as much as a set that filled them to give 64MB.


In 1997, a powerful desktop PC had 64 MB. 128 MB was definitely workstation-class, and extremely rare on a laptop. For reference, the SGI O2 workstation came out in late 1996; standard configurations were 64, 128 and 256 MB of RAM, and it maxed out at 1 GB (which cost several thousand dollars). The previous SGI Indy (1992 to 1996), still a real beast in 1997 if you had the 180 MHz R5000 version, maxed out at 256 MB.

Remember that back then, hardware evolved really fast; I had a 128 MB laptop in 2000. So it went from "unheard of" to "standard" in 5 years or so.

My current work machines have had 8 GB for 7 years :)


Back in the late 90s most consumers and even some power users were using 486 laptops (myself included) because Pentium laptops were still pricey. Affordable laptops back then were usually a generation behind desktops in speed and hardware support; there wasn't a mobile culture like there is today. When you did get a PowerBook or HP like you mentioned, you paid through the nose for it.

I was using a TI TravelMate 4000M (made by Acer) from 1995 to 1998; it was a graduation gift from my estranged father. It propelled me into modern (for the time) computing and set me on the course to the career I have today. I never upgraded beyond 8MB of RAM (it came with 4MB on board and supported a 4MB or 16MB additional module), but that was enough to do what I needed at the time.

I would have loved to have an advanced workstation laptop like in the OP article, but I didn't have $21k laying around for something like that.


>> "A whopping 128MB of RAM (in 1997, this unimaginable in a laptop)" [sic]

> Really? The PowerBook in 1997 supported up to 160 MB

I think it would have been pretty unusual to have that much RAM installed at that time. I remember my parents upgrading our PC from 8MB to 16MB at about that time (or maybe 16MB to 32MB), and that was more than any of my friends had.


The PowerBooks weren’t that far away from the Sparc boxes. I used to sell them at retail in college to doctors and similar folks. They’d frequently walk out having dropped $10k on the Amex for the device, software, gear, etc.

I had a Toshiba Tecra that was loaded from a memory perspective and that device retailed around $4k.


Actually, the TurboSPARC-based 3TX and 3000ST supported 256MB via an undocumented factory upgrade... not sure how they did it exactly, but it was official.

Might have been something as crazy as installing RAM with a custom address line soldered on, etc...

The 3GX from 1994 also supported 128MB, but the SIMMs had to be low profile and fast enough... so a bit hard to source.


I've got a couple Tadpoles, including a PA-RISC PrecisionBook, but actually I like the Sun Ultra-3 better. The one I have is a rebadged Tadpole Viper in beautiful purple. The 1.2GHz CPU gets stinking hot unless you throttle it and the battery life is about LOL minutes, but it can still run relatively recent Firefoxes and Solaris is still useful.


I had a SPARCstation Voyager for a while when I worked at Sun. One of the very first all-in-ones ever with a color LCD screen. (My "regular" Ultra2 desktop also had a couple of 24" HD monitors, which weighed more than a sack of concrete each, and were much bulkier, too.)


The Voyagers are lovely machines, but my personal favourite SPARC luggable is the Solbourne S3000 with the flaring orange gas plasma screen. It looks like no other workstation.

http://www.floodgap.com/retrobits/solace/


I have a non-rebranded Viper and, before that, the Sparcle. I loved the thing, but the 4MB of video memory made it unusable for daily driving.


Oh, and a SAIC Galaxy 1100 arrived today. Still running HP-UX, but as TAC-4. MIL-SPEC goodness.


Little-known story about how this little box saved the shuttle program: When Dan Goldin took over NASA, he mandated that all of Mission Control would run on Windows NT. (This was back when the NT kernel was excellent, as it always had been, but the surrounding bits had serious stability problems). The shuttle astronaut corps rebelled, literally refusing to fly if forced to bet their lives against the BSOD. The solution was to provide the astronauts with their own independent copy of the software that Mission Control previously ran on their SPARCstations, so even if MCC crashed, the shuttle wouldn't. Since the SPARCbook was already flight-qualified, the astronaut revolt was quickly and quietly resolved, with almost no one even being aware of it...


That is awesome!


Very cool!


> Normally this isn’t a problem because you can boot a Solaris installation CD or network image and clear the root password in the /etc/shadow file. BUT....I have no SCSI CD-ROM that I can plug into the external SCSI port on the SPARCbook

That brought back a memory. I was working at a place in the 90s that got one of those SPARCbooks that we were setting up for a customer, but we didn't have a SCSI CD-ROM drive to install with. I actually went to my previous employer (a Sun workstation support team in the computer center of a university) who kindly let us use one of theirs on site.
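
For anyone who hasn't done the trick the quote alludes to: you boot install media, mount the root slice, and blank the password field in the shadow entry. Roughly like this (the mount point is the usual convention; the hash shown is made up for illustration):

    # after booting a Solaris install CD or network image:
    mount /dev/dsk/c0t0d0s0 /a    # mount the root slice
    vi /a/etc/shadow              # change  root:dp6xV.F3KzTyU:6445::::::
                                  # to      root::6445::::::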


At some point I lost my lust for obscure hardware like this, although I remember what it felt like. I'm really disappointed he didn't take an image of the disk as it was when he got it, because it would have been really interesting to root through all the old Nortel stuff.


It's so easy to have a SPARCstation emulation with QEMU these days that it's difficult to justify having ancient hardware taking up space in the house.

Netscape 4.76 on Solaris 8. It took a while to find a website that still rendered.

http://www.w6rz.net/netscape.png
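
For anyone who wants to try it, the invocation is roughly like this (a sketch: the machine model, memory size, and image names are assumptions, not something I've tested here):

    qemu-system-sparc -M SS-5 -m 128 \
        -drive file=solaris8.img,format=raw \
        -cdrom sol-8-sparc.iso -boot d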


It's so easy I've got a client who runs old production Solaris/Sparc code that way!


> The first thing I did was make a complete clone of the hard disk (c0t0d0s2) on my Solaris 10 desktop in case anything went wrong

I think he did.
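
(For reference, the classic way to take that clone; the output path here is made up:)

    # slice 2 (s2) conventionally covers the whole disk on Solaris
    dd if=/dev/rdsk/c0t0d0s2 of=/export/sparcbook.img bs=1024k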


Very prudent. I don’t see a torrent magnet link though :)


I still have all the code for sure - I spent a lot of time poring over it recently. They spent a lot of time creating elaborate software-based mobile test environments to test it, and I get the impression that the project was more of a proof-of-concept thing as a result. The code was nicely modularized and somewhat tight (with a few things that drove me nuts).


Oh man, I remember seeing one of these (or something really similar) when I went with my dad to work as a kid. When he told me that it cost more than $19,000, I flat-out didn't believe him. To nine-year-old me, there was no appreciable difference between $19,000 and $1,000,000,000, and I could not fully understand how anyone would pay that much for anything.


It's a more staggering number now - I get $34k using the CPI inflation calculator. Can you imagine paying that much for a single laptop? Does anyone even make laptops that cost that much now?


I still like OpenLook/OpenWindows best of all the 80s/90s windowing system attempts on Unix. Far preferable to Motif & its successors.


The display in the images looks quite sharp - a resolution of 1280x1024! Not too awful by today's standards. Lenovo's ThinkPad X280, currently available with a retail price above 1000 euro, comes with a 1366x768 display as the default option.


The display looks quite nice, but according to the manual I found [1], it's only 800x600 in the "ST", or 1024x768 in the "XT" variant.

[1]: https://www.manualslib.com/manual/1454949/Tadpole-Sparcbook-...


Judging from the photos that display is obviously not 1280x1024.


I don't think 1280x1024 flatscreens existed in 1997. A 1024x768 display was pretty extravagant on a laptop.


The ThinkPad 770X had a 1280x1024 display option.

https://www.thinkwiki.org/wiki/Category:770X


It's not from 1997. Pentium II mobile was introduced in 1998.


I had a Thinkpad A20p in 2000 with an awesome 15" 1400x1050 display. That was a great laptop.


Yup - the default is 800x600 on the LCD, but I changed it to 1024x768 just fine (you just have to ensure that you select the option that outputs video simultaneously to the LCD and the external VGA port to get it to work). I've since riced the desktop quite nicely and it's pretty spiffy. DOOM runs very well on it too.


I run FreeBSD on a used ($620 on eBay) X270: 6th-gen i7, 16GB DDR4, 512GB NVMe drive, and a 1920×1080 screen.


Ahem... the Fujitsu CPU in this was not the fastest SPARCv8 implementation around - IIRC, Ross made a HyperSPARC that ran at a galloping 200MHz!!


I never really understood splitting off /opt and /usr from / when they're all partitions on the same drive.


ZFS takes this concept up one level of abstraction. You have a `zpool` which is a collection of disks. You then have "datasets" which would be analogous to partitions. One property of a dataset is a mountpoint, but they have many more. In ZFS different datasets can have different compression algorithms, different checksum algorithms, different log (journal) characteristics, different ACL types, it can also manage exporting the dataset over the network (setup NFS/CIFS for you), encrypt the dataset, etc.

That might give you a glimpse into why it's useful to split off different datasets. I might want to tune my database (/var/lib/postgres) for throughput, while my home directory is tuned for maximum compression & encrypted, while my public fileshares are unencrypted, etc.

Also, it's often useful to have different partitions if you need different filesystems; not every filesystem is suited to every use case. And sometimes you're constrained by other software: for instance, many older bootloaders had very limited filesystem support, and even today your EFI system partition needs to be FAT-formatted, which should serve as an obvious reason why you'd want to segregate /boot/efi from the rest of your system.
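
To make those per-dataset policies concrete, here's roughly what they look like (the pool name and property values are illustrative, and native encryption needs a reasonably recent OpenZFS):

    zfs create -o recordsize=8k -o logbias=throughput tank/postgres   # tuned for DB I/O
    zfs create -o compression=gzip-9 -o encryption=on \
        -o keyformat=passphrase tank/home                             # compressed + encrypted
    zfs create -o sharenfs=on tank/public                             # exported over NFS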


Sure, but none of that applies to this laptop, where they're all UFS.


If one filesystem gets full, you can still do some work on the others. You can also more easily umount a filesystem to fsck it, without needing to boot live media. You can mount filesystems with different flags, like readonly on /boot to prevent accidents (see the vfstab sketch below).

Modern smartphones do this. On Android, for example, you'll find a filesystem for system binaries generally mounted readonly, a filesystem for the base OS and system apps, and another filesystem for user-installed apps. If a rogue app fills the entire filesystem it resides on, the system apps can still function.
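
On Solaris that sort of policy goes in /etc/vfstab; something like this for a read-only /usr (slice numbers assumed):

    #device to mount    device to fsck      mount point  FS   fsck  boot  options
    /dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /usr         ufs  1     no    ro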


Also, if you corrupt one filesystem you don't necessarily trash all of your work. Especially if you corrupt the root filesystem: all you need is to reinstall the OS, and all of your data is still intact, without having to mess with your backups.

For the most part however people have decided that slicing up your disk into multiple partitions isn't worth the hassle anymore, and almost all distros just dump everything on a giant /.


To be fair, the partitioning UX in both the FreeBSD and Debian installers (possibly Linux installers in general) is scary.


It used to be scary on Debian, but there's been a non-scary one starting from the etch release (Debian 4.0). The new UI is quite clear and comfortable to use, even in the "expert" mode.


What’s to be afraid of at install time? If you make something too big/small just do it again.


/opt is where several commercial packages from Sun would install, so by keeping it separate, if you updated your OS you wouldn't necessarily have to reinstall NeWSprint or similar products that were, honestly, a complete pain in the ass to install.


Yeah... It makes more sense for /var and even /usr/local. / and /usr are both managed by the OS installation. /opt makes a little more sense because it's often used by large, third party packages any one of which could be larger than the whole of / and /usr together.

If / or /var run out of space (even now) then lots of daemons get into quite a bit of trouble. Anything from hanging to consuming a lot of CPU in the disk allocator.

It's often not possible to ssh into a machine where / or /var has filled up.

There was also the problem of file system robustness. Whilst things were a lot better than the non-UNIX platforms of the day, things weren't quite as good as they are today. These filesystems often were not journalled. That meant that if you had a power failure you could lose the entire volume. (I've personally had at least one / partition be destroyed by fsck after a power failure. I was so, so glad I didn't lose /home. /opt and /usr/local as well!)

Whether you were the user or the administrator dictated which partition you'd prefer to survive, but at least it added some robustness. It's nice to have things separated based on how you will restore them: / and /usr will come from the vendor; /opt will probably be a whole load of different media from all over the place.


In some Unixen (like OpenBSD) it's done so that you can enforce different partition-wide security policies (e.g. "this partition should never have executable files").
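
e.g. with a line like this in /etc/fstab (the partition letter is assumed):

    /dev/sd0g  /home  ffs  rw,nodev,nosuid,noexec  1  2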


A bit of historical legacy from NFS workstations, where it was useful to have root and swap local* (for speed), but /usr common across many workstations. Those separations kind of became habit.

*Of course it was possible to have root and swap on NFS as well (for truly diskless).


Swap on NFS sounds kind of crazy.


I would think the opposite. Presumably there's zero contention so no need to worry about concurrency control like locking. You're just paging linear blocks of memory to disk and the bookkeeping remains in local host memory, so all you're doing is sending and receiving fixed, independent blocks of memory that are opaque to the remote host. And prior to v4 NFS was UDP and stateless, so the code to do this from the kernel could be kept quite simple.

The functional equivalent to this is iSCSI, which as far as I understand is similar to if not literally what AWS uses under the covers for EC2 block devices.


> The functional equivalent to this is iSCSI, which as far as I understand is similar to if not literally what AWS uses under the covers for EC2 block devices.

Back when disk-SSD hybrid storage was still a novel idea (~2006), one of the guys at Sun had an interesting demo with ZFS.

He first created a bunch of storage pools on a server on the east coast from iSCSI volumes on the west coast, installed Postgres, and ran a benchmark. He next created a pool from the same iSCSI volumes but added a local SSD as a ZIL/slog device (for caching), and ran the same benchmark.

The SSD-enabled pool got quite close to local-disk performance out of the SAN.
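
The setup would have looked roughly like this (device names are illustrative):

    zpool create tank c2t0d0 c2t1d0    # pool backed by the remote iSCSI LUNs
    zpool add tank log c1t0d0          # local SSD as a separate ZIL (slog) device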


Disk was expensive back in the day.

Depending on your compute needs, it could have made more sense to do fully-remote X.

So if many people were running intensive apps like MATLAB or Mathematica, then root/swap-over-NFS made sense, so that you could use your local workstation's CPU all to yourself and not bog down the central server. If most people were doing modest things (xterm, mail, browsing), then running everything on the central server and displaying it on dumb X11 terminals wasn't a bad idea.


I've done swap on an nbd share... with the nbd from busybox no less... on a SPARCstation LX with Gentoo (compiled on a T2000).
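
(Roughly, with the stock nbd tools rather than busybox; the server name and port are assumptions:)

    nbd-client swapserver 10809 /dev/nbd0   # attach the remote block device
    mkswap /dev/nbd0
    swapon /dev/nbd0                        # classic caveat: the client daemon
                                            # itself must never get swapped out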


Splitting /usr off is mostly done for access control. Plus, if the directories are already split up, you have the option to move them around later. /opt is usually for third-party apps that aren't part of your OS distribution - apps that come with precompiled binaries and libraries you want to make available system-wide, for example.


Drives used to be really freaking small.


So? All the more reason to keep it all on one partition, to use the space more effectively?


Run a Unix machine out of disk space.

Caveat - run a Unix machine you don't care about (i.e. a disposable VM) out of disk space.

It's much easier to recover a machine if the storage is segmented. Yeah, a single partition is more convenient, but multiple partitions are more resilient.


Two of the reasons in this era: prioritizing which filesystem you wanted to overflow first, and the fact that you necessarily used network mounts for lots of stuff (home dirs, project dirs, GNU tools, expensive software), even on workstations.

Sun was hugely into NFS, which was both a blessing and a curse -- you'd spend 4-5 figures each on a fleet of workstations with only 105 MB local drives, and they'd tend to all end up hanging frequently, because of a server burp or network burp. It didn't even have to be a burp on a filesystem you were using -- it only had to be (stale) mounted.

(I'm not this old, but I had the excessively good fortune to have access to super-cool computers and Internet as a kid.)


It really depends on what you're doing. On a desktop/laptop/workstation 1 partition is fine. On servers, all the data is not necessarily on the same disk. It's common to have the OS itself running on local disks and have everything else mounted on some non-local storage like a Storage Area Network, Network Attached Storage, NFS shares, etc.


Sure, I'm talking about this laptop specifically which has one HDD, and partitions it heavily.


It is a lot easier to upgrade an OS on a Unix box with proper partitions separating the things you want to persist across installs from the OS itself. It makes backups and backup restores easier as well. It's also best to have swap space be a partition, though these days that probably doesn't matter.

A modern Windows install has something like four partitions for the C drive. One's labeled restore, one's C, and I have yet to care enough to figure out the other two. On boot, though, you see just C.


Sorry about the delayed response; I misunderstood what you were getting at. It's because it's running Solaris, and Solaris had a different partitioning scheme compared to operating systems now. There wasn't really a "desktop" version of Solaris, so it would just slice up the disk the same way it would on a single-disk server. It's a really old-school OS.


The primary user of these laptops was the U.S. military, and they saw combat on the battlefield. Garrett D'Amore, father of the illumos project, used to work on the drivers for this laptop at Tadpole.


I wanted one of these really badly back in the day. I had to settle for running Solaris x86 on a generic laptop instead. Nice find.


I still have one, and a Tadpole laptop as well.


Can't we just somehow copy that beautiful keyboard/case to make a modern open implementation of it? We can upgrade all the internals after copying the original format, and do all the prototyping with 3D printing.

Please tell me someone else has a screw loose and wants to do this too.


Now there's a blast from the past. As a young researcher, I got to have a Sparcbook circa 1994 to take to events and demo the multicast multimedia conferencing software we were working on. It cost about five times more than everything I owned put together, and I was always afraid I'd forget it in the pub or something. The alternative was traveling with a Sparc 20 in my hand luggage, which I did too on many occasions. People were always more impressed by seeing video on a laptop though. These days, you think nothing of it.


Oh man - we had one of those Tadpoles back in the dot-com days! We got our SPARC Oracle/WebLogic stack loaded up on it just like our 'real' servers. Our CEO took it to a very important conference to do a live demo, took to the podium, attempted to adjust the microphone... and the cord tipped a cup of water onto it. It killed the laptop, but he still pulled off what he was pitching. We were all hoping to run off with it after the event.


I saw one of these last weekend at the PNW Vintage Computer expo! I coveted those really badly back in the day.


I remember those... I used to have one; it was so unbelievably useful. I ended up turning it into a fastboot server in an emergency. Then... we spent ages getting Linux onto it :)


I had a similar kind of SPARC portable computer running Solaris at a job in 1999. I didn't actually use it for my job. At that point, I had it as a curiosity.


I used to work at a used computer store that recycled lots of e-waste. Got to see a few different SPARCbooks but sadly none of them worked.


I loved the SPARCbook I used, it was great! This reminds me that I should probably fire up my SPARC LX with its whopping 64MB of RAM...


I knew a guy who travelled with one of these (probably 97 or 98?). I thought he was crazy but it did have nice specs.


A bit unrelated, but for a while I tried to get hold of one of those Toshibas that Sun sold with Solaris x86 on them.


If anyone has one of these for sale, please contact me (check my profile).


I haven't seen one of those in a few years ;)



