Hacker News

Yes, those claims were a bit much, and in fairness jart chimed in to say so too. [1]

FWIW, I'm not an ML person, but it doesn't seem entirely crazy to think that SSDs are becoming fast enough that you could avoid keeping a huge model in RAM in some cases. Especially if "computational SSDs" (SSDs that can do some basic first-stage computation without transferring the input data over PCIe) ever become common. (I think some of the ML accelerators for sale today may be approximately this.)

[1] https://news.ycombinator.com/item?id=35393615
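One concrete version of that idea is memory-mapping the model file, so pages are faulted in from the SSD on demand instead of the whole file being read into RAM up front. A minimal sketch (the file contents and the helper name `sum_first_bytes` are made up for illustration; real model loaders map the weights file the same way):

```python
import mmap
import os
import tempfile

def sum_first_bytes(path, n):
    """Map a file read-only and touch only its first n bytes.

    Only the pages backing m[:n] need to be faulted into RAM;
    the rest of the file can stay on the SSD untouched.
    """
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            return sum(m[:n])

# Demo with a small temporary file standing in for a large weights file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(bytes(range(10)) * 100)
    path = tmp.name

print(sum_first_bytes(path, 10))  # prints 45 (sum of bytes 0..9)
os.remove(path)
```

The OS page cache then decides which parts of the file stay resident, which is exactly the "RAM as a cache over the SSD" trade the comment describes.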



Much of performance work in computing is about moving data around the memory hierarchy in ways that are inconvenient to programmers.

I set up an SSD as a spare swap device, and basically treated my system as having RAM + SSD's worth of RAM. It let me finish a few big jobs (~96 GB of RAM) overnight that wouldn't have completed otherwise.
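On Linux, that setup is a few commands. A sketch, assuming the spare SSD is the partition `/dev/sdb1` (a placeholder; these commands need root, and `mkswap` destroys whatever is on the target partition, so double-check the device name first):

```shell
# Format the spare SSD partition as swap space (erases it!).
sudo mkswap /dev/sdb1

# Enable it: the kernel can now page out to the SSD.
sudo swapon /dev/sdb1

# Optionally keep pages in RAM and spill to swap only under pressure
# (lower vm.swappiness = less eager swapping).
sudo sysctl vm.swappiness=10

# Verify the new swap space is active.
swapon --show
free -h
```

With that in place, a job whose working set exceeds physical RAM pages out to the SSD instead of being killed, at the cost of running slower whenever it touches swapped-out pages.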



