At some point the rest of us have to stop paying for people's stupid behavior.
US taxpayers have funneled untold billions of dollars into these states to prop up their real estate industry by providing FEMA flood insurance at hugely subsidized rates, while the private insurance that's becoming unaffordable only covers wind damage. When you see those photos of towns wiped away by a hurricane, that's flood damage, and your tax dollars are paying to rebuild those houses in the same place. Over and over.
In a just world there would be legal liability for some of the real estate industry folks who persuaded people to put all of their net worth into an asset that's going to be blown away in the next hurricane, and other consequences for the politicians who enabled and encouraged them. The odds of this happening in real life seem to be about zero.
Note that this is very early work - one of the papers (the HUST one from this June) shows an 8x8 cell device, i.e. 64 bits in SLC mode.
DRAM kind of plateaued in 2011, when it hit $4/GB; since then it's gotten faster and bigger, but not appreciably cheaper per bit.
This could change if there were a way to do 3D DRAM, like 3D NAND flash, but that doesn't appear to be on the table at present. Note that this isn't the "stacking" they talk about with IGZO-DRAM, where they build layers on top of each other - it's not 3D stacking itself that made flash cheap.
Flash got insanely cheap because of the single-pass 3D architecture - it's pretty cheap to put a large number (~400 nowadays) of featureless layers onto a chip, then you drill precise holes through all the layers and coat the inside of the hole with the right stuff, turning each hole into a stack of ~400 flash cells.
The cost of a wafer (and thus a chip) is proportional to the time it spends in the ultra-expensive part of the fab. 3D NAND puts maybe 100x as many cells onto a wafer as the old planar flash (you can't pack those holes as closely as the old cells), but what's important is that the wafer only spends maybe 2x as long (I'm totally guessing here) in the fab. If it took 100x as long, laying down a few hundred layers, the price advantage would vanish.
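To make the arithmetic above concrete, here's a back-of-the-envelope check. The 100x cell density and 2x fab time are the guesses from the comment, not measured numbers:

```python
# Wafer cost is roughly proportional to time in the expensive part of
# the fab, so cost per bit ~ fab time / cells per wafer.
# All figures below are the illustrative guesses from the comment.
planar_cells_per_wafer = 1.0    # normalized baseline
planar_fab_time = 1.0           # normalized baseline

nand3d_cells_per_wafer = 100.0  # ~100x more cells per wafer
nand3d_fab_time = 2.0           # but only ~2x the fab time (a guess)

cost_per_bit_planar = planar_fab_time / planar_cells_per_wafer
cost_per_bit_3d = nand3d_fab_time / nand3d_cells_per_wafer
print(cost_per_bit_planar / cost_per_bit_3d)  # 50.0x cheaper per bit

# If laying down the layers took 100x as long instead of 2x,
# the advantage would vanish:
print(100.0 / nand3d_cells_per_wafer)         # 1.0 -- no advantage
```

So the whole game is keeping fab time roughly flat while the cell count explodes.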
3D RAM stacking would still have significant benefits, since the amount of board space taken up by RAM is significant. Quadrupling capacity per area would be a game-changer for GPUs with HBM, and could allow a CAMM-like standard to make its way into servers.
They can do it with 3D NAND because the electrons are injected into the charge storage medium through brute force. The problem is that the capacitance scales with area. We're reducing the node size but now the aspect ratios are insane and the trenches for the storage wells are >3um high. That's over 1,000 times thicker per layer compared to NAND.
Related to this, I remember seeing a research talk that fairly convincingly demonstrated that almost all performance differences between several ARM and x86 CPUs were explained by microarchitectural features (branch predictor type and size, etc) rather than ISA. There was one benchmark affected by a deficiency in the ARM ISA, but that's probably fixed by now.
Relevant to this: memory cost per GB dropped exponentially every year from the days of the earliest computers until 2010 or so, reaching $4/GB in 2011. A decade and a half later it's still in the $2-$4/GB range.
Note also that SSDs started out only slightly cheaper per GB than DRAM - the 80GB Intel X25-M had a list price of about $500 when it was released in 2008, and references I find on the net show a street price of about $240 for the next-gen 80GB device in 2009. Nowadays you can get a 1TB NVMe drive for about the cost of 16GB of RAM, although you might want to spend a few more bucks to get a non-sketchy device.
I worked for 5 startups before I went back to grad school and then entered academia; it was over a quarter century ago but I think some of the lessons remain valid.
The best startup I was at was one where four engineers who knew each other had dropped out of a big company and started with a consulting project, developing the first version of the product for an early customer (a national lab) using FPGAs. Then they got venture funding to develop an ASIC version, which is when I got hired as employee #12.
The next best one started when a bunch of friends from undergrad - mostly engineers but one with a business degree - convinced a sales person to go in with them on a startup.
In both cases they didn't have to hire a founding engineer - the founding engineer or engineers were part of the original group that got seed funding. Some of the later hires were quite good, and rose to the level of some of the founders or higher, but their success wasn't dependent on the supernatural ability of someone they hadn't yet identified or hired.
To be honest, the whole idea of "I have a great idea, but don't know how to translate it into product, so I'll hire people to do that" seems like a recipe for disaster in so many ways.
Yeah there's a reason that "ideas guys" are memed to death online. It's very easy to have a great idea, the skill is in selling that idea to VCs, customers, friends and family etc.
The FTL executes on the SSD controller, which (on a DRAM-less controller) has limited on-chip SRAM and no DRAM. In contrast, controllers for more expensive SSDs require an external on-SSD DRAM chip of 1+GB.
The FTL algorithm still needs one or more large tables. The driver allocates host-side memory for these tables, and the CPU on the SSD that runs the FTL has to reach out over the PCIe bus (e.g. using DMA operations) to write or read these tables.
It's an abomination that wouldn't exist in an ideal world, but in that same ideal world people wouldn't buy a crappy product because it's $5 cheaper.
One of the Japanese sites has a list of SSDs that people have observed the problem on - most of them seem to be DRAMless, especially if "Phison PS5012-E12" is an error (the PS5012-E12S is the DRAMless version).
Then again, I think DRAMless SSDs represent a large fraction of the consumer SSD market, so they'd probably be well-represented no matter what causes the issue.
Finally, I'll point out that there's a lot of nonsense about DRAMless SSDs on the internet - e.g. Google shows this snippet from r/hardware: "Top answer: DRAM on the drive benefits writes, not reads. Gaming is extremely read-heavy, and reads are..."
FTL stands for flash TRANSLATION layer - it needs to translate from a logical disk address to a real location on the flash chip, and every time you write a logical block that real location changes, because you can't overwrite data in flash. (you have to wait and then erase a huge group of blocks - i.e. garbage collection)
If you put the translation table in on-SSD DRAM, it's real fast, but gets huge for a modern SSD (1+GB per TB of SSD). If you put all of it on flash - well, that's one reason thumb drives are so slow. I believe most DRAM-full consumer SSDs nowadays keep their translation tables in flash, but use a bunch of DRAM to cache as much as they can, and use the rest of their DRAM for write buffering.
DRAMless controllers put those tables in host memory, although I'd bet they still treat it as a cache and put the full table in flash. I can't imagine them using it as a write buffer; instead I'm guessing when they DMA a block from the host, they buffer 512B or so on-chip to compute ECC, then send those chunks directly to the flash chips.
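To illustrate the translation mechanism described above, here's a toy FTL sketch. Everything about it (block size, structure, the absence of garbage collection) is drastically simplified - real FTLs are far more complex, and as noted there's guesswork involved:

```python
# Toy FTL: logical block addresses (LBAs) map to physical (block, page)
# locations, and every write goes to a fresh page because flash can't
# be overwritten in place. Illustrative only; GC is omitted entirely.

PAGES_PER_BLOCK = 4  # tiny made-up number; real blocks hold hundreds

class ToyFTL:
    def __init__(self, num_blocks):
        self.mapping = {}                     # LBA -> (block, page)
        self.free_blocks = list(range(num_blocks))
        self.cur_block = self.free_blocks.pop(0)
        self.cur_page = 0

    def write(self, lba):
        """Write a logical block: take the next free page and remap
        the LBA; the old physical page becomes garbage to collect."""
        if self.cur_page == PAGES_PER_BLOCK:  # block full, grab another
            self.cur_block = self.free_blocks.pop(0)
            self.cur_page = 0
        self.mapping[lba] = (self.cur_block, self.cur_page)
        self.cur_page += 1

    def read(self, lba):
        return self.mapping[lba]              # logical -> physical

ftl = ToyFTL(num_blocks=8)
ftl.write(lba=5)
ftl.write(lba=5)        # rewrite: same LBA, new physical location
print(ftl.read(5))      # (0, 1) -- the data moved, it wasn't overwritten

# Why the table gets huge: one entry per 4KB page at ~4 bytes each
# works out to about 1GB of table per TB of flash.
print((2**40 // 4096) * 4 / 2**30)  # 1.0 GB per TB
```

The last line is where the "1+GB per TB" figure comes from, and why a DRAMless controller with a few MB of SRAM has to park the table somewhere else - flash, or host memory via HMB.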
There's a lot of guesswork here - I don't have engineering-level access to SSD vendors, and it's been a decade since I've put a logic analyzer on an SSD and done any reverse-engineering; SSDs are far more complicated today. If anyone has some hard facts they can share, I'd appreciate it.
I don't buy this. There are plenty of DRAMless SATA SSDs, which should be impossible if your description were correct, not to mention DRAMless drives working just fine inside USB-NVMe enclosures.
>but gets huge for a modern SSD (1+GB per TB of SSD)
except most drives allocate 64MB through HMB. Do you know of any NVMe drives that steal gigabytes of RAM? AFAIK Windows limits HMB to ~200MB?
>Finally, I'll point out that there's a lot of nonsense about DRAMless SSDs on the internet
The FTL doesn't need all that RAM. RAM on drives _is_ used for caching writes, or more specifically for reordering and grouping small writes to efficiently fill whole NAND pages, preventing the fragmentation that destroys endurance and write speed.
Are you talking about the fact that NVMe works by MMIO and DMA? So does pretty much any SATA controller, so there's no inherent difference there (it's been _many_ years since the dominant way of talking to devices was through programmed I/O ports). Unless you have an NVM device with host-backed memory (as discussed elsewhere in the thread), it's not like the CPU can just go and poke freely at the flash, just as it cannot overwrite a SATA disk's internal RAM or forcefully rotate its platters. It can talk to the controller by placing commands and data in a special shared memory area, but the controller is fundamentally its own device with separate resources.
In theory data center cooling is simple - CPUs run at 60-70C, while the outside ambient is usually 30C or less, so heat should just "flow downhill" with a bit of help from fans, pumps, etc.
The problem with using air cooling to get it there is that the humans who run the data center have to enter and breathe the same air that's used to cool the computers, and if your working fluid gets too hot it's quite unhealthy for them. (we run our hot aisles at 100F, which is a bit toasty, and every third rack is a heat exchanger running off the chilled water lines from the outside evaporative cooler, modulo a heat exchanger to keep the bird shit out)
We're not going to be able to pump much heat into the outside world unless our working fluid is a decent amount hotter than ambient, so when it gets reasonably warm outside we need to put chillers (water-to-water AC units) in the loop, which consume energy to basically concentrate that heat into a higher-temperature exhaust. When it's really hot outside they consume quite a bit of energy.
If the entire data center was liquid cooled we could have coolant coming from the racks at a much higher temperature, and we'd be able to dump the heat outside on the hottest days without needing any chillers in the loop. As it is we have some liquid cooled racks running off the same chilled water lines as the in-aisle heat exchangers, but the coolant temp is limited by the temperature of our hot aisles, which are quite hot enough already, thank you.
15 years ago IBM installed a supercomputer at ETH Zurich that used 60C hot water as its coolant, with a heat exchanger to the building’s hot water system (which is typically somewhat less than 60C) https://en.m.wikipedia.org/wiki/Aquasar
> CPUs run at 60-70C, while the outside ambient is usually 30C or less, so heat should just "flow downhill" with a bit of help from fans, pumps, etc.
This isn't how you think of heat flow. The CPUs run at a given power. Their temperature will depend on the ambient temperature, and on the thermal impedance between them and the ambient. If the thermal impedance is too high, they'll be too hot and die.
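Put as a formula: a part dissipating power P through a total thermal resistance R_th to ambient settles at T_die = T_amb + P * R_th. The numbers below are made up for illustration:

```python
# Die temperature from power and thermal resistance:
#   T_die = T_amb + P * R_th
# All values here are assumed, purely to show the relationship.
P = 300.0      # watts dissipated by the CPU
R_th = 0.1     # K/W, die-to-ambient thermal resistance (assumed)
T_amb = 30.0   # degrees C, ambient

T_die = T_amb + P * R_th
print(T_die)   # 60.0 C
```

The point being that 60-70C isn't a property of the CPU; it's the equilibrium you get for a given power, ambient, and cooling solution, and it rises linearly if any of those get worse.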
A gas-based design seems like it would be better at a small scale - e.g. the facility in the link has a reservoir the better part of a mile away from the turbines, and has a max output of 600 MW or so.
CO2 may actually be a good working fluid for the purpose - cheap, non-toxic except for suffocation hazard, and liquid at room temperature at semi-reasonable pressures. I'm not an expert on that sort of thing, though.
It's not just law enforcement and sentencing - there are verifiable numbers for the results of certain crimes - homicides and auto theft come to mind - and most have declined precipitously.
E.g. Boston had 1,575 reports of auto theft in 2012, compared with 28,000 in 1975; Massachusetts had 242 murders in 1975, and 121 in 2012. (a 56% drop in homicide rate, as population went up 14%)
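Quick check of the parenthetical arithmetic, using the figures quoted above:

```python
# Murders halved while population grew 14%, so the per-capita
# homicide rate fell by roughly 56%.
murders_1975, murders_2012 = 242, 121
pop_growth = 1.14  # population in 2012 relative to 1975

rate_ratio = (murders_2012 / pop_growth) / murders_1975
print(round(1 - rate_ratio, 2))  # 0.56 -> a 56% drop in rate
```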
That car theft number is blowing my mind. I would have easily guessed 10x that.
Are there any aspects of the crime that make it less appealing? Electronic counter measures too good? Price of replacement parts no longer carry a premium? Too easy to get caught?
I would bet that the pervasive use of electronic records has something to do with it, too. According to this 1979 report from the Nat'l Assoc. of Attorneys General, in the 70s there were a lot of paths to retitling a stolen vehicle, which, along with the rise of chop shops and easier export of stolen cars, supported a large stolen-car economy: https://www.ojp.gov/pdffiles1/Digitization/59904NCJRS.pdf
Consumer goods went on a 50 year deflation streak while health care, housing, and education pumped to the moon. That's its own problem, but it's hard to steal any of those three things.