Is the latency in flash devices really from the charge trap physics itself? I thought it was more from all the overhead needed to verify the bit(s) are actually what they appear to be. Also, if there were a market for low-latency storage, surely Optane wouldn't have died and we'd be living in a better, more random-read/write-performant world.
Optane was too expensive for not enough of an improvement. It either needs to be as cheap as flash or it needs to be so much faster that people are willing to pay a premium (say, as much faster than an SSD as an SSD was faster than an HDD).
Well, the trick is Optane was terrible as RAM and pricey as storage.
From what I can tell it was targeted at:
* where persistence was important, better than ram
* for cases where more ram didn't help
* for cases where adding multiple NVMe didn't help
* for cases where QD1 was very important (as opposed to QD32 throughput)
* for cases that didn't need too much storage, which would break the bank
I got close for various database uses, ZFS caches, etc. But it never quite beat buying more RAM or 2-4x the NVMe. I tried. It was a pretty narrow niche.
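On the QD1 point: at queue depth 1 there's no parallelism to hide latency, so per-command latency is the whole story, and that's exactly where Optane pulled ahead. A minimal fio job for comparing QD1 random-read latency across devices might look like the sketch below (the filename, size, and runtime are illustrative placeholders, not anything from this thread):

```ini
; qd1.fio -- compare QD1 4k random-read latency between devices
[global]
ioengine=io_uring
direct=1          ; bypass the page cache so we measure the device
time_based=1
runtime=30

[qd1-randread]
filename=/tmp/fio-test.dat   ; point this at a file on the device under test
size=1g
rw=randread
bs=4k
iodepth=1         ; QD1: one outstanding I/O, latency dominates
```

Run it with `fio qd1.fio` on each device and compare the completion-latency (clat) percentiles; Optane's edge shows up in QD1 p99 latency, not in QD32 throughput numbers.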
I find it very frustrating that people expect new technologies to immediately beat existing ones on every metric, when the existing ones have the advantage of a long timeline of iterative refinement. Optane looked amazing, but it just wasn’t given enough time to go through that process. I know it’s a function of how the market works, but it’s still sad to see promising things die on the vine like that.
They don't have to beat existing tech on every metric. But if they're more expensive for a gain that most people don't care about, then they'd better have a niche that pays well to keep them going.
Also, Optane had 5 years. That's a pretty good run for something that never delivered enough to gain its own niche.
Developments in hardware are typically measured in decades, not years, so 5 years isn’t long at all to gain traction and go through price and manufacturing refinements. Again, I understand that’s just the reality of the market, but it would be nice if there was a way around it.
It was more a problem with Intel: they failed to market it, failed to innovate on the tech, failed to increase yield, and failed to increase demand for it, which led to sending money to Micron for unused capacity. Yes, a failure of Intel's CEO.
But the technology also wasn't as promising as people think it was. Z-NAND offered something similar in read, slower in sustained random write, at 50% of the price. In the end even Z-NAND failed to reach any customers. XL-Flash is the only thing left, and judging from the news I won't be surprised if it stops in 2025 or 2026 as well. Normal NAND is fast enough for most things.
However, I do wonder if Optane could have had a different role in the age of AI.
Optane is still faster for random IO (latency, peak and average) than any modern PCIe 5 consumer SSD by quite a bit. Some enterprise SSDs are about equal, but they come with comparable cost and size constraints.
It’s nice to see how the AI ‘hype’ is accelerating (no pun intended) technology like storage…but at what point does processing become the bottleneck and not storage?
Is that not already the case?