
LPCAMM2 takes less actual space on the board than 4 dimm slots, which is what it will be replacing. Mainly because wiring it is simpler and it can be closer to the CPU socket.


One module - i.e. the equivalent of 2 slots - takes less space than 4 dimm slots?

The customer loses here.


Almost nobody runs 4 sticks of unbuffered DDR5 anyway, because you won't be able to get them to run above 4800 and you'll spend several minutes waiting on memory training. And you get a minimum of 16GB per stick unless you buy modules with dead ranks.

There's so little margin to total signal degradation in DDR5, these improvements are actually necessary (and more upgrade friendly than the inevitable future of memory on SoC).


> And you get a minimum of 16GB per stick unless you buy modules with dead ranks.

16 GB per stick means you need 4 sticks to have a reasonably future-proof 64 GB...


That's the smallest size, 32 on a single module also exists. There is no conspiracy to deprive you of density. Most consumers will just appreciate latency over more than 64GB of memory. And yes, that's a real world performance gain, especially on larger AMD chips where L3 isn't shared among all cores.

If you want density, use a workstation board and registered memory, which offloads a bunch of responsibility from the central memory controller onto the module itself, so you can stick terabytes of somewhat fast memory in there.


> Most consumers will just appreciate latency over more than 64GB of memory.

Note that the choice here is not either/or, so long as you are willing to pay for it. For example:

https://www.corsair.com/us/en/p/memory/cmk96gx5m2b6600c32/ve...


yep, that's hynix m-die.

not sure what the status quo is, but you used to be able to buy a cheap kit as long as the timings indicated hynix, and they'd manually overclock quite high without issues.


The slots are not independently upgradeable on DDR5.

Firstly, everyone uses dual channel. You don't buy a single module, you buy two at a time.

Secondly, speeds go down so much for 2DPC that if you currently have 32GB (total) and want 64GB, the correct upgrade path is not to purchase an additional 32GB; it is to toss your current two 16GB modules and purchase two new 32GB modules.

CAMM2 just recognizes this: it first puts all channels on a single module, and second, gets rid of that stupid, unusable second connector, in exchange for having room in the standard for larger module sizes.


> CAMM2 just recognizes this

Ah, it recognizes that we customers are being fucked over by the vendors' chase for bigger numbers in benchmarks. Got it, thanks.

Edit: based on all the explanations I got, DDR5 reminds me of ... the Pentium 4.


RAM signaling is very sensitive to data line length, even more so if you use low-power RAM modules.

This is not a "bigger number in a useless benchmark" thing. If you can standardize the shortest and fastest routing possible with a socket, everybody can use faster RAM for less (RAM command waits are the biggest killer of responsiveness), and everyone can enjoy faster, more responsive systems.

Moreover, more RAM means bigger caches in any OS, and having those caches in faster memory means snappier, faster systems. Lastly, games and other engineering software use great amounts of RAM and move tons of data between GPU RAM and system RAM. Having faster pipes helps.

It's not about benchmarks, it's about real world performance.


Is it? They're preventing you from having more ram in the name of having a limited amount of ram that is faster.

Which, as you said yourself, is useless if your working set doesn't fit in your limited amount of ram.

> RAM command waits are the biggest killer of responsiveness

... no, using swap is the biggest killer of responsiveness :)


I'm looking at some of the latest Intel processors' specifications:

    - Intel Core Ultra 9 185H: 2 memory channels.
    - Intel Core Ultra 7 165U: 2 memory channels.
    - Intel Core i9-14900KF: 2 memory channels.
The last one is a 6GHz, 24 core, 32 thread, 253W behemoth, and even that has two memory channels.

What's the benefit of having four slots, and adding more wait states and reducing bandwidth in a system this powerful? That doesn't make sense.

Instead, I'd rather have two channels on a single module and change the whole RAM in one swoop. If you are unsure, over-speccing the system slightly at the start won't hurt in the long run.

> using swap is the biggest killer of responsiveness

I have bad news for you, then. Even if you have tons of free RAM, unused pages are still pushed to your swap. On the other hand, while swap-heavy computation is a last resort on highly loaded systems, command waits are with you the moment you touch the power button.


> If you are unsure, over-speccing a system slightly at the start won't hurt on the long run.

Hey, this is what the Apple haters are complaining about :)

Seems to be coming to Wintel too.

> unused pages are still pushed to your swap

On Windows? Pretty sure that both on Mac OS and Linux I can have the swap mostly unused. Or even run without swap on Linux.


> On Windows? Pretty sure that both on Mac OS and Linux I can have the swap mostly unused.

Nope, on Linux. My macOS system is not using its swap at the moment, but I've seen it use swap while memory pressure was low and there was free space. OTOH, on my 32GB Linux system, the current state is as follows:

                   total        used        free      shared  buff/cache   available
    Mem:           31894       10051        2967         169       19498       21843
    Swap:          15722        2506       13216
This system has not been stressed to full RAM since the last boot. It's the Linux kernel's own choice to move these pages to swap over time.
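
You can watch this happen without any extra tooling. Here's a rough Python sketch (assuming a Linux box with procfs; it just reads /proc/meminfo, the same numbers `free` summarizes above, plus the vm.swappiness knob that controls how eagerly idle anonymous pages get swapped out while RAM is still free):

    # Rough sketch, Linux-only: report swap usage and vm.swappiness
    # straight from procfs.
    def meminfo_kib():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.strip().split()[0])  # values are in kB
        return info

    m = meminfo_kib()
    print(f"MemAvailable: {m['MemAvailable'] // 1024} MiB")
    print(f"Swap used:    {(m['SwapTotal'] - m['SwapFree']) // 1024} MiB")

    with open("/proc/sys/vm/swappiness") as f:
        print("vm.swappiness:", f.read().strip())

Run it a few hours apart on an otherwise idle box and you'll see swap usage creep up even though MemAvailable never got anywhere near zero.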

> Or even run without swap on Linux.

This is how Kubernetes systems and most HPC clusters run, and they hard-lock the moment you hit 0 in the "free" column.
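
For what it's worth, a swapless box can be babysat with something as dumb as the sketch below (again Linux /proc/meminfo; the 512 MiB threshold is purely an illustrative guess), which complains before reclaim stalls or the OOM killer become the only options:

    # Toy watchdog for a swapless system: poll MemAvailable and warn
    # when it drops below a (made-up) low watermark.
    import time

    LOW_WATERMARK_MIB = 512  # illustrative, tune for your workload

    def mem_available_mib():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) // 1024  # kB -> MiB
        raise RuntimeError("MemAvailable not found in /proc/meminfo")

    while True:
        avail = mem_available_mib()
        if avail < LOW_WATERMARK_MIB:
            print(f"warning: only {avail} MiB available, OOM risk")
        time.sleep(5)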


It's not really a vendor thing but a signal integrity issue.

CAMM2 just makes a lot more sense with DDR5. And the fact is that we most likely want the next standard to be even faster, so the issue is only going to get worse with DDR6 or whatever we go for next.


> It's not really a vendor thing but signal integrity issue.

So, running into the same limits that CPUs have been running into for a while?



