
What is Ubuntu's source code worth when you download precompiled binaries without checking whether they were built from that source code?


That's your choice to make, and depending on your threat model you may have some level of trust in Canonical not to screw over its customers.


The vast majority of people run just one GPU, for which motherboards have a dedicated direct-to-CPU x16 slot. Stealing lanes comes into play with chipset-connected slots.


I bought a Gigabyte X870E board with 3 PCIe slots (PCIe5 x16, PCIe4 x4, PCIe3 x4) and 4 M.2 slots (3x PCIe5, 1x PCIe4). Three of the M.2 slots are connected to the CPU, and one is connected to the chipset. Using the 2nd and 3rd CPU-connected M.2 slots causes the board to bifurcate the lanes assigned to the GPU's PCIe slot, so you get x8 GPU, x4 M.2, x4 M.2.
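
(A rough sketch of the lane budget that forces this, assuming the usual AM5 figures: 24 usable CPU lanes plus a separate x4 chipset link, with X870E reserving 4 lanes for the mandatory USB4 controller and 4 for the first CPU-attached M.2 slot. Slot names follow the Gigabyte manual quoted in a reply below; the Python is just to make the accounting explicit.)

    # Hedged lane-budget sketch for an AM5 X870E board (assumed figures).
    CPU_LANES = 24                      # usable general-purpose lanes on AM5
    FIXED = {"USB4": 4, "M2A_CPU": 4}   # X870E's fixed allocations

    def pciex16_width(m2b_populated: bool, m2c_populated: bool) -> int:
        """Lanes left for the GPU slot once the shared M.2 slots take theirs."""
        shared = CPU_LANES - sum(FIXED.values())  # 16 lanes for PEG + M2B/M2C
        if m2b_populated or m2c_populated:
            return 8  # the slot bifurcates x16 -> x8/x4/x4; there is no x12 mode
        return shared  # all 16 go to the GPU

    print(pciex16_width(False, False))  # 16
    print(pciex16_width(True, False))   # 8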

I wish you didn't have to buy Xeon or Threadripper to get considerably more PCIe lanes, but for most people I suspect this split is acceptable. The penalty for gaming going from 16x to 8x is pretty small.


For a moment I didn't believe you, then I looked at the X870E AORUS PRO ICE (rev. 1.1) motherboard [1] and found this:

> 1x PCI Express x16 slot (PCIEX16), integrated in the CPU:

> AMD Ryzen™ 9000/7000 Series Processors support PCIe 5.0 x16 mode

> * The M2B_CPU and M2C_CPU connectors share bandwidth with the PCIEX16 slot.

> When the M2B_CPU or M2C_CPU connector is populated, the PCIEX16 slot operates at up to x8 mode.

[1]: https://www.gigabyte.com/Motherboard/X870E-AORUS-PRO-ICE-rev...


IIRC, X870 boards are required to spend some of their PCIe lanes on providing USB4/Thunderbolt ports. If you don't want those, you can get an X670 board that uses the same chipset silicon but provides a better allocation of PCIe lanes to internal M.2 and PCIe slots.


Even with a Threadripper you're at the mercy of the motherboard design.

I use an ROG board that has 4 PCIe slots. While each can physically seat an x16 card, only one of them has 16 lanes -- the rest are x4. I had to demote my GPU to a slower slot to get full throughput from my 100GbE card. All this despite having a CPU with 64 lanes available.


I don't think the Threadripper platform is to blame for you buying a board with potentially the worst possible PCIe lane routing. The latest generation has 88 usable lanes at minimum, most boards have four x16 slots, and Threadripper Pro supports seven Gen 5.0 x16 links, an absolutely insane amount of IO. "At the mercy of motherboard design"? Do the absolute minimum amount of research and pick any other board.


You're using 100GbE ... in an end-user PC? What would you even saturate that with?


I wouldn't think it's about saturating it during normal use; rather, simply exceeding 40 Gbit/s, which is very possible with solid-state NASes.
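
A back-of-the-envelope check (the ~7 GB/s figure is a typical Gen4 NVMe sequential-read number, not any specific product):

    # One modern NVMe drive vs. a 40GbE link, in Gbit/s.
    nvme_gbps = 7.0 * 8     # ~7 GB/s Gen4 sequential read = ~56 Gbit/s
    link_gbps = 40          # 40GbE line rate, ignoring protocol overhead
    print(nvme_gbps > link_gbps)   # True: a single drive can already exceed 40GbE
    print(nvme_gbps * 2)           # 112.0 -- two drives out-run even 100GbE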


Okay, but then I need to ask what kind of use case doesn't mind the extra latency from Ethernet but does care about the difference between 40Gbps and 70Gbps.


Though for the most part the performance cost of going down to x8 PCIe is often pretty tiny: only a couple of percent at most.

[0] shows a pretty "worst case" impact of 1-4%: that's on the absolute highest-end card possible (a GeForce RTX 5090) pushed all the way down to x16 PCIe 3.0. A lower-end card would likely show an even smaller difference. They even showed zero impact from x16 PCIe 4.0, which is the same bandwidth as 8 of the PCIe 5.0 lanes supported on X870E boards like you mentioned.
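
Easy to sanity-check, since per-lane throughput roughly doubles each PCIe generation (a sketch using the standard ~0.985 GB/s per Gen3 lane after 128b/130b encoding):

    GEN3_PER_LANE_GBS = 0.985  # PCIe 3.0: 8 GT/s with 128b/130b encoding

    def bw(gen: int, lanes: int) -> float:
        """Approximate one-direction PCIe bandwidth in GB/s."""
        return GEN3_PER_LANE_GBS * 2 ** (gen - 3) * lanes

    print(round(bw(4, 16), 1))  # 31.5 -- PCIe 4.0 x16
    print(round(bw(5, 8), 1))   # 31.5 -- PCIe 5.0 x8, identical bandwidth
    print(round(bw(3, 16), 1))  # 15.8 -- the tested "worst case"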

Though if you're not on a gaming use case and know you're already PCIe-limited, it could be larger; but people with that sort of use case likely already know what to look for, and have systems tuned to it rather than a generic consumer gamer board.

[0] https://gamersnexus.net/gpus/nvidia-rtx-5090-pcie-50-vs-40-v...


> I wish you didn't have to buy Xeon

But that's the whole point of Intel's market segmentation strategy - otherwise their low-tier workstation Xeons would see no market.


I wonder how this works. I'm typing this on a machine running an i7-6700K, which, according to Intel, only has 16 lanes total.

It has a 4x SSD and a 16x GPU. Their respective tools report them as using all the lanes, which is clearly impossible if I'm to believe Intel's specs.

Could this bifurcation be dynamic, and activate those lanes which are required at a given time?


For Skylake, Intel ran 16 lanes of PCIe to the CPU, and ran DMI to the chipset, which had PCIe lanes behind it. Depending on the chipset, there would be anywhere from 6 lanes at PCIe 2.0 to 20 lanes at PCIe 3.0. My wild guess is that a board from back then would have put M.2 behind the chipset with no CPU-attached SSD for you; that fits with your report of the GPU having all 16 lanes.

But if you had the nicer chipsets, Wikipedia says your board could split the 16 CPU lanes into two x8 slots, or one x8 and two x4 slots, which would fit. This would usually be dynamic at boot time, not at runtime; the firmware would typically check whether anything is in the x4 slots and, if so, set bifurcation; otherwise the x16 slot gets all the lanes. Some motherboards do have PCIe switches to use the bandwidth more flexibly, but those got really expensive; I think around the transition to PCIe 4.0, but maybe 3.0?


Indeed. I dug out the manual (MSI H170 Gaming M3), which has a block diagram showing the M.2 port behind the chipset, which is connected to the CPU via DMI 3.0. In my mind, the chipset was connected via actual PCIe, but apparently it's counted separately from the "actual" PCIe lanes.


Intel's DMI connection between the CPU and the chipset is little more than another PCIe x4 link. For consumer CPUs, they don't usually include it in the total lane count, but they have sometimes done so for Xeon parts based on the consumer silicon, giving the false impression that those Xeons have more PCIe lanes.
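
Which also means everything hanging off the chipset (M.2, SATA, USB, networking) shares that one link's ceiling. Roughly, assuming DMI 3.0, i.e. electrically PCIe 3.0 x4:

    # DMI 3.0 ceiling, shared by every chipset-attached device.
    dmi_gb_s = 0.985 * 4       # PCIe 3.0 x4 after encoding overhead
    print(round(dmi_gb_s, 1))  # ~3.9 GB/s per direction, total for the chipset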


The real PITA is when adding an NVMe drive disables the SATA ports you planned to use.


Doesn't this usually only happen when you put an M.2 SATA drive in? I've never seen a motherboard manual have this caveat for actual NVMe M.2 drives. And encountering an M.2 SATA drive is quite rare.


I have a spare-parts NAS on a Z170 (Intel 6th/7th gen) motherboard with 8 SATA ports and 2 NVMe slots: if I put an x2 SSD in the top slot it would disable two ports, and if it was an x4 it would disable four! Luckily the bottom M.2 slot doesn't conflict with any SATA ports, just an expansion card slot. (The board even supports SATA Express; did anything actually use that?)

SATA ports are far scarcer these days though, and there's more PCIe bandwidth available anyway, so it's not surprising that there aren't conflicts as often anymore.


Nope; for AM5, both of the available chipsets[1] have 4 flexible serial lanes that can be configured as x4 PCIe 3.0, 4x SATA, or two and two. I think Intel does similar, but I haven't really kept up.

[1] A620 is cut down, but everything else is actually the same chip (or two)


As some others have pointed out, there are some motherboards where, if you put M.2 cards in the wrong slot, your x16 GPU slot turns into x8.


Still better than my washer that shows 1 minute and then goes on for 6 more, instead of adjusting the time in the middle of the load.


Ha, I got them beat with RES and filtering out every generic subreddit that touches politics


The energy is transferred to the planet from its orbit as it gets stretched, and is then dissipated as heat through friction.


You usually keep old mails around that malware can then silently forward; this is a problem for unencrypted data. To authenticate through email, even if possible, there are hoops the attacker would need to jump through, and you'd likely be notified of e.g. a password reset mail.


We could integrate expiry dates for emails after which they get deleted. That's feasible.


While more informative on its own, it's still kind of useless with the wide variance of power usage across models.

Laptops use Wh, but you still need to do the same research as you would if it was in mAh (ignoring the 100Wh max for planes, which you'd have to calculate yourself).
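
The conversion itself is one multiplication; the catch is that the mAh label hides the nominal voltage you need for it. A sketch (the 26,800mAh/3.7V figures are a made-up but typical power-bank-style rating):

    def watt_hours(mah: float, nominal_volts: float) -> float:
        """Wh = Ah x V; the voltage is the part the mAh label doesn't tell you."""
        return mah / 1000 * nominal_volts

    print(watt_hours(26_800, 3.7))  # 99.16 -- just under the 100Wh carry-on cap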


The problem is that mAh tells you absolutely nothing about battery capacity. An mAh rating is useful if you're comparing two single-cell batteries using the same chemistry - but that's not what we are doing in practice.

For example, a 100Wh battery could be built using a single li-ion cell (3.7V nominal), which means it would be 27,000mAh. Same battery using an LFP cell (3.2V nominal)? 31,250mAh. But wait, it's a laptop: it's far more efficient to put multiple cells in series. Four li-ion cells in series would mean a capacity of about 6,750mAh, or 5,400mAh if you use five cells.

The five-cell one is probably more efficient, but if you blindly look at the mAh rating it looks like it has a significantly smaller capacity. Nobody is going to buy that, so the marketing team will convert it back to "single-cell equivalent" and put 27,000mAh on the box while still keeping an 18.5V rating, and people who passed a high-school physics class are going to think it's a 500Wh battery.

It gets even worse with LFP, because now the marketing team is asking you to convert it to "single-cell li-ion equivalent" and the figure is now completely unrelated to what's actually happening! That 5,200mAh 6-cell LFP battery? Just put a 27,000mAh label on it: who cares, bigger is better.
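
Spelling the arithmetic above out (a sketch assuming the usual nominal voltages, 3.7V li-ion and 3.2V LFP):

    def pack_mah(wh: float, cell_volts: float, cells_in_series: int) -> float:
        """The mAh label for a pack of the given energy, chemistry and layout."""
        return wh / (cell_volts * cells_in_series) * 1000

    # The same 100Wh of energy under five different labels:
    for name, volts, series in [("1S li-ion", 3.7, 1), ("1S LFP", 3.2, 1),
                                ("4S li-ion", 3.7, 4), ("5S li-ion", 3.7, 5),
                                ("6S LFP", 3.2, 6)]:
        print(f"{name}: {pack_mah(100, volts, series):,.0f} mAh")
    # 27,027 / 31,250 / 6,757 / 5,405 / 5,208 mAh -- all the exact same 100Wh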


You can't directly map it to the Win key, but you can use Keyboard Manager to remap one (or both) of the Win keys to a shortcut you then assign to Command Palette: https://u.numerlor.me/zRq6PA


Alcohol dries faster; demineralised water has more time to dissolve and redeposit particles. Though you have to be careful with alcohol, as it can destroy some plastics.


I've been running ZFS on root on my Debian home server. Only the install was a tiny bit more involved, but it was done in maybe half an hour going off the guide and trying to understand everything it was doing with zero experience (...and then I did it again after I broke networking an hour in, lol).

