
If only we had connectors which could actually handle such currents. Maybe something along the lines of an XT90, but no, Nvidia somehow wants to save a bit of space or weight on their huge brick of a card. I don't get it.



The USB-C connectors on laptops and phones can deliver 240 watts [1] in an 8.4x2.7mm connector.

12VHPWR is 8.4x20.8mm so it's got 7.7x the cross-sectional area but transmits only 2.5x the power. And 12VHPWR also has the substantial advantage that GPUs have fans and airflow aplenty.

So I can see why someone looking at the product might have thought the connector could reasonably be shrunk.

Of course, the trick USB-C uses is to deliver 5A at 48V, instead of 50A at 12V.
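
Back of the envelope in Python, just replaying the footprint and power numbers above (a rough sketch, not a spec citation):

    usb_c_area  = 8.4 * 2.7    # mm^2, USB-C receptacle footprint
    hpwr_area   = 8.4 * 20.8   # mm^2, 12VHPWR footprint
    usb_c_power = 240          # W, USB-PD EPR (48 V x 5 A)
    hpwr_power  = 600          # W, 12VHPWR (12 V x 50 A)

    print(hpwr_area / usb_c_area)     # ~7.7x the footprint
    print(hpwr_power / usb_c_power)   # 2.5x the power
    print(usb_c_power / usb_c_area)   # ~10.6 W per mm^2 of connector
    print(hpwr_power / hpwr_area)     # ~3.4 W per mm^2 of connector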

[1] https://en.wikipedia.org/wiki/USB-C#Power_delivery


Nobody thought that they could push 50A at 12V through half the connector. It's management wanting to push industrial design as opposed to safety. They made a new connector borrowing from an already existing design, pushed up the on-paper amperage by 3A, never changed the contact resistance, and made the parent connector push current near its limit (10.5A max vs 8.3A). And oh, the insertion force is so, so much higher than ever before. Previous PCIe connectors push about 4A through a connector designed for about 13A.
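
Rough math (Python sketch; the 6-pair/3-pair pin counts are the commonly quoted figures, treat them as assumptions):

    def per_pin_amps(watts, volts=12.0, pairs=6):
        # assume the load is shared evenly across the current-carrying pairs
        return watts / volts / pairs

    hpwr  = per_pin_amps(600, pairs=6)   # 12VHPWR: ~8.3 A per pin
    pcie8 = per_pin_amps(150, pairs=3)   # classic 8-pin PCIe: ~4.2 A per pin

    print(hpwr,  10.5 / hpwr)    # ~8.3 A against a 10.5 A rating -> ~1.26x headroom
    print(pcie8, 13.0 / pcie8)   # ~4.2 A against a ~13 A rating  -> ~3.1x headroom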

Also worth mentioning: the 12VHPWR connector was being market-tested during Ampere, the same generation where Nvidia doubled down on the industrial design of their first-party cards.

Also there's zero devices out there that actually deliver or take 240W over USB-C. Texas Instruments literally only released the datasheets for a pivotal supporting IC within the last 6 months.


> Also there's zero devices out there that actually deliver or take 240W over USB-C. Texas Instruments literally only released the datasheets for a pivotal supporting IC within the last 6 months.

The 16-inch Framework laptop can take 240W power. For chargers, the Delta Electronics ADP-240KB is an option. Some Framework users have already tried the combination.


> Nobody thought that they could push 50A at 12V through half the connector.

If you're saying that the connector doesn't have a 2x safety factor then I'd agree, sure.

But I can see how the connector passed through the design reviews, for the 40x0 era cards. The cables are thick enough. The pins seem adequate, especially assuming any GPU that's drawing maximum power will have its fans producing lots of airflow; plenty of connectors get a little warm. There's no risk of partial insertion, because the connector is keyed, and there's a plastic latch that engages with a click, and there's four extra sense pins. I can see how that would have seemed like a belt-and-braces approach.

Obviously after the first round of melted connectors they should have fixed things properly.

I'm just saying to me this seems like regular negligence, rather than gross negligence.


The spec may say it, but I've never encountered a USB-C cable that claims to support 240 watts. I suspect if machines that tried to draw 240W over USB-C were widespread, we would see a lot of melted cables and fires. There are enough of them already with lower power draw charging.


Search Amazon for "240W USB" and you get multiple pages of results for cables.

A few years ago there was a recall of OnePlus cables that were melting and catching fire; I had 2 of them and both melted.

But yes 240W/48V/5A is insane for a spec that was originally designed for 0.5W/5V/100mA. I suspect this is the limit for USB charging as anything over 48V is considered a shock hazard by UL and 5A is already at the very top of the 3-5A limit of 20AWG for fire safety.
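
Quick sketch of why the voltage had to climb (Python; just I = P / V at the usual PD voltage levels):

    # current a 240 W load pulls at the various USB-PD voltage levels
    for volts in (5, 9, 15, 20, 28, 36, 48):
        print(f"{volts:2d} V -> {240 / volts:5.1f} A")
    # 5 V would need 48 A; only at 48 V does it come down to the 5 A a cable can carry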


We've had a variety of 140W laptops for a few years already, so we've been a long way past the original spec for a while now.

The advantage of USB-C is the power negotiation, so getting the higher rating only on circuits that actually support it should be doable and relatively safe.

The OnePlus cables melting gives me the same impression as when hair dryer power cables melt: it's a solved problem, the onus is on the maker.


240W cables are here, but at around a 10x price premium. Also, cables are chipped, so e.g. a 100W cable won't allow 240W in the first place.

Users needing the 240W have a whole chain of specialized devices, so buying a premium cable is also not much of an issue.
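
Toy sketch of the ceiling logic (Python; not the actual PD state machine, just the min() idea):

    def negotiated_watts(source_w, cable_w, sink_w):
        # the weakest link in the chain sets the ceiling
        return min(source_w, cable_w, sink_w)

    print(negotiated_watts(240, 100, 240))   # 100 -> a 100 W cable caps the whole chain
    print(negotiated_watts(240, 240, 240))   # 240 -> full power needs an EPR-rated cable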


The connector could reasonably be shrunk. It just now has essentially no design margin so any minor issue immediately becomes major! 50A DC is serious current to be treated with respect. 5A DC is sanely manageable.


If only we had electrical and thermal fuses that could be used to protect the connectors and wires.


At these wattages just give it its own mains plug.


> At these wattages just give it its own mains plug.

You might think you're joking, but there are gamer cases with space for two PSUs, and motherboards which can control a secondary PSU (turning both PSUs on and off together). When using a computer built like that, you have two mains plugs, and the second PSU (and thus the second mains plug) is usually dedicated to the graphics card(s).


I've done this, without a case, not because I actually used huge amounts of power, but because neither PSU had the right combination of connectors.

The second one was turned on with a paperclip, obviously.

Turns out graphics cards and hard drives are completely fine with receiving power but no data link. They just sit there (sometimes with fans at max speed by default!) until the rest of the PC comes online.


You can also hook up a little thingy that takes SATA power on one side and 24-pin on the other. As soon as there is power on the SATA side, the relay switches and the second PSU turns on.


This may not be fast enough for some add-in cards. It would be better to connect the PS_ON (green) cable from both ATX24 connectors together, so that the motherboard turns on both PSUs simultaneously.

This would still have the disadvantage that the PWROK (grey) cable from the second PSU would not be monitored by the motherboard, leaving the machine prone to partial reset quirks during brown-outs. Normally a motherboard will shut down when PWROK is deasserted, and refuse to come out of reset until it returns.


The joke actually removes this connector problem though, while a secondary PSU does not.


Server systems already work like this for redundancy.


No they don't. Server-grade redundant PSUs usually use a CRPS form factor, where individual PSU modules slot into a common multi-module PSU housing known as a PDB (power distribution board). Each module typically outputs only 12V and the PDB manages the down-conversion to 5V, 5VSB, and 3.3V. From there, there is only one set of power cables between the PDB and the system's components including the motherboard and any PCIe add-in cards. Additionally, there is a PMBus cable between the PDB and the motherboard so that the operating system and the motherboard's remote management interface (e.g. IPMI) can monitor the status of each individual module (AC power present, measured power input, measured power output, measured voltage input, measured frequency input, fan speeds, temperature, which module is currently powering the system, etc).

PSUs can be removed from the PDB and replaced and reconnected to a source of power without having to shut down the machine or even remove the case lid. You don't even need to slide the machine out of the rack if you can get to the rear.

Example:

https://www.fspgb.co.uk/_files/ugd/ea9ce5_d90a79af31f84cd59d...
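
For illustration, polling that status from the host side might look roughly like this (Python sketch; the bus number and module addresses are made up, the command codes and LINEAR11 decoding are standard PMBus):

    from smbus2 import SMBus

    STATUS_WORD = 0x79   # summary fault/warning bits
    READ_VIN    = 0x88   # input voltage
    READ_IOUT   = 0x8C   # output current
    READ_POUT   = 0x96   # output power

    def linear11(raw):
        """Decode a PMBus LINEAR11 word: 5-bit exponent, 11-bit mantissa."""
        exp = raw >> 11
        if exp > 15:
            exp -= 32                    # two's complement exponent
        mant = raw & 0x7FF
        if mant > 1023:
            mant -= 2048                 # two's complement mantissa
        return mant * 2 ** exp

    with SMBus(3) as bus:                # hypothetical I2C bus behind the BMC
        for addr in (0x58, 0x59):        # hypothetical PSU module addresses
            status = bus.read_word_data(addr, STATUS_WORD)
            vin    = linear11(bus.read_word_data(addr, READ_VIN))
            iout   = linear11(bus.read_word_data(addr, READ_IOUT))
            pout   = linear11(bus.read_word_data(addr, READ_POUT))
            print(f"PSU@0x{addr:02x}: status=0x{status:04x} "
                  f"Vin={vin:.1f}V Iout={iout:.1f}A Pout={pout:.0f}W")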


You can have the machine draw twice the amount from the server PSUs. It kills the redundancy, but it is supposed to work.


But that still only happens over one set of power cables, from the PDB. The post you replied to described using a separate PSU with separate component power cables to power specific components. Current sharing in server PSUs is handled by every PSU equally powering all of the components.

Edit: For example, in a 3+1 redundant setting, 3 PSUs would be active and contributing toward 1/3 of the total load current each; 1 PSU would be in cold standby, ready to take over if 1 of the others fails or is taken offline.


Not without precedent: The Voodoo 5 6000 by 3dfx came with its own external PSU almost 25 years ago.

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQqLXew...


Also put it in a separate case, and give it an OcuLink cable to attach to the main desktop tower. I suspect that's exactly where we're heading, to be fair.


I've built video rigs that did just that: an external expansion chassis you could put additional PCIe cards in when the host only had 3 slots. The whole eGPU thing used to be a cute novelty, but it might have been more foreshadowing than we realized.


Have you measured latency?

In modern GPUs (roughly the last 4 years), physical wiring distance is starting to contribute substantially to latency.


Latency due to wiring distance is far from being an issue in these scenarios. The signals travel at the speed of light: 186 miles per millisecond.

The problem you will encounter with PCIe gen5 risers is signal integrity.


> The signals travel at the speed of light

It's about 75-90% the speed of light, but even that's too slow.

Modern hardware components are getting to latencies in single-digit nanoseconds. Light travels about 30cm in a nanosecond, so extending a PCIe port to a different box is going to make a measurable difference.


A single round trip isn't going to register, but there are multiple in a frame, so it's not inconceivable that it could add up at some point. I would like to see it demonstrated, though.
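
Back of the envelope (Python; the cable length and round-trip counts are made-up round numbers, and real PCIe transfers are pipelined rather than strictly serialized):

    C = 3.0e8                 # m/s, speed of light in vacuum
    v = 0.66 * C              # m/s, rough propagation speed in a cable

    cable_m = 1.0             # extra one-way cable length
    round_trip_ns = 2 * cable_m / v * 1e9   # ~10 ns added per round trip

    for round_trips_per_frame in (10, 1_000, 100_000):
        added_us = round_trip_ns * round_trips_per_frame / 1000
        print(f"{round_trips_per_frame:>7} round trips/frame -> +{added_us:8.1f} us")
    # a 60 fps frame is ~16,700 us, so this only matters if a lot of
    # dependent round trips end up serialized within one frame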


Without one of these rigs, you would not be able to do much at all because of the limited PCIe slots in the host. "Not much" here means render times of hours per clip, or even longer. With the external chassis and additional cards, you could achieve enough bandwidth for realtime playback. A typical workflow would be taking RED RAW camera footage that takes heavy compute to debayer, running color correction on the video, running any additional filters like noise removal, and finally writing the output back to something like ProRes. Without the chassis, not happening; with the chassis you could do realtime playback during the session and faster-than-realtime rendering/exporting.

Also, these were vital to systems like the Mac Pro trashcan, which had 0 PCIe slots. It was a horrible system, and everyone I know who had one went back to their 2012 cheese grater systems with the chassis.

Another guy I know was building his own 3D render rig for home experimental use when those render engines started using GPUs. He built a 220V system that he'd unplug the dryer to use. It had way more GPU cards than he had slots for, using PCIe splitters. Again, these were not used to draw realtime graphics to a screen; they were solely compute nodes for the renderer. He was running circles around the CPU-only render farm nodes.

People think that the PCIe lanes are the limiting factor, but again, that's just for getting the GPU's output back to the screen. As compute nodes, you do not need full lanes to get the benefits. But for doubting-Thomas types like you, I'm sure my anecdote isn't worth much.


There were no latency concerns. These were video rigs, not realtime shoot-'em-ups. They were compute devices running color correction and other filters, not pushing a video signal to a monitor at 60fps/240Hz-refresh nonsense. These did real work /s


Ah makes sense, the other kind of graphics!


We could also do it like we do in car audio: just two big fat power cables, positive and negative, 4AWG or even bigger, with a nice crimped ferrule or lug bolted on.


True. At these prices they might as well include a power brick and take responsibility for the current-carrying path from the wall to the die.


> If only we had connectors which could actually handle such currents.

The problem isn't connectors; the problem (fundamentally) is sharing current between multiple parallel conductors.

Sure, you can run 16A over 1.5 mm² wires and 32A over 2.5 mm² (taken from [1]; yes, it's for 230V, but that doesn't matter, the current is what counts, not the voltage). Theoretically you could run 32A over 2x 1.5 mm² (you'd end up with 3 mm² of cross section), but it's not allowed by code: if either of the two legs disconnects entirely or develops increased resistance, e.g. due to corrosion or a loose screw / wire nut (hence, please always use Wago-style clamps; screws and wire nuts are not safe even if torqued properly, which most people don't do), the other leg suddenly has to carry (much) more current than it's designed for, and you risk anything from molten connectors to an outright fire. That is what Nvidia is currently running into, together with bad connections (e.g. due to dirt ingress).
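
Quick current-divider sketch (Python; the milliohm values are illustrative, not measurements):

    def split(i_total, r1, r2):
        # classic current divider: each leg carries current inversely
        # proportional to its resistance
        i1 = i_total * r2 / (r1 + r2)
        return i1, i_total - i1

    print(split(32, 0.010, 0.010))   # healthy:          16.0 A / 16.0 A
    print(split(32, 0.050, 0.010))   # one leg corroded: ~5.3 A / ~26.7 A
    print(split(32, 1e9,   0.010))   # one leg open:     ~0 A   / 32 A on a 16 A leg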

The correct solution would be for the GPU to not tie together the incoming individual 12VHPWR pins on a single plane right at the connector input but to use MOSFETs and current/voltage sense to detect stuff like different current availability (at least it used to be the case with older GPUs that there were multiple ways to supply them with power and only, say, one of two connectors on the GPU being used) or overcurrents due to something going bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.
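
A toy version of that per-pin monitoring (Python; the read-out and thresholds are hypothetical stand-ins for a per-pin shunt + ADC, not anything Nvidia actually ships):

    # Toy per-pin overcurrent/imbalance monitor. The input is one sample of
    # measured current per incoming 12 V pin; the thresholds are illustrative.
    PIN_LIMIT_A   = 9.2    # per-pin trip point, just under the rated maximum
    IMBALANCE_PCT = 30     # flag pins carrying far more than their fair share

    def check_pins(currents):
        """Return a list of (pin, reason) faults for one sample of pin currents."""
        faults = []
        fair_share = sum(currents) / len(currents)
        for pin, amps in enumerate(currents):
            if amps > PIN_LIMIT_A:
                faults.append((pin, f"overcurrent: {amps:.1f} A"))
            elif amps > fair_share * (1 + IMBALANCE_PCT / 100):
                faults.append((pin, f"imbalance: {amps:.1f} A vs ~{fair_share:.1f} A avg"))
        return faults

    # e.g. one pin has a bad contact, so the others pick up its share:
    print(check_pins([9.9, 9.8, 9.7, 9.9, 0.3, 10.1]))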

Alternatively each 12VHPWR pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but again that also needs a bunch of associated circuitry.

Another, even more annoying issue, by the way, is grounding: all the electricity that comes in also wants to go back to the PSU, and it can take any number of paths - the PCIe connector, the metal backplate, the 12VHPWR extra connector, the shield of a DP cable that goes to a Thunderbolt adapter card's video input, the SLI connector to the other GPU and its ground...

Electricity is fun!

[1] https://stex24.com/de/ratgeber/strombelastbarkeit


> The correct solution would be for the GPU to not tie together the incoming individual 12VHPWR pins on a single plane right at the connector input but to use MOSFETs and current/voltage sense to detect stuff like different current availability (at least it used to be the case with older GPUs that there were multiple ways to supply them with power and only, say, one of two connectors on the GPU being used) or overcurrents due to something going bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.

> Alternatively each 12VHPWR pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but again that also needs a bunch of associated circuitry.

So as you say, monitoring multiple inputs happened on the older xx90s, and most cards still do it. It's not hard.

Multiple DC-DC converters is something every GPU has. That's the only way to get enough current. So all you have to do is connect them to specific pins.


> It's not hard

It still is because in the end you're dealing with dozens of amps on the "high" voltage side and hundreds of amps on the "low" (GPU chip) voltage side. The slightest fuck-up can and will have disastrous consequences.

GPUs these days are on the edge of physics when it comes to supplying them with power.


Let me rephrase.

Doing the power conversion is hard.

Realizing that you already have several DC converters sharing the load, and deciding to power specific converters with specific pins, is comparatively easy. And the 3090 did it.


This is the top-end halo product. What's wrong with pushing the envelope? Should we all play Tetris because "what's wrong with block graphics?"

I'm not defending the shitty design here, but I'm all for always pushing the boundaries.


Pushing the boundaries of a simple connector is not innovation, that's just reckless and a fire hazard.



