I'm curious as to why no vendor has yet tried to push a 24V or 48V PSU rail standard to reduce the amperage pushed over these crappy pins. Of course then the board itself needs to be able to handle the higher voltage for down-conversion, but don't these systems usually use switching / Buck regulators as the first stage anyway, which could easily be tuned to handle the higher input voltage with minimal loss added?
> I'm curious as to why no vendor has yet tried to push a 24V or 48V PSU rail standard to reduce the amperage pushed over these crappy pins.
people on another HN thread yesterday literally were bitching about the cost (not safety - cost) of a $1 adapter being "forced" on them by NVIDIA and now you want to tell them they're gonna have to buy a whole new PSU? that's gonna be quite the comment thread.
but yeah you're right in general, gotta move to 48V to move forward, pushing infinite amps at low voltages is completely bass-ackwards. The pcie-add-in-card form-factor needs a rethink in general, the GPU is now the largest and most power-hungry part of the system by far, and power isn't the only problem imposed by that ancient AT standard. imo we should move to some standardized "module sizes" for GPUs so that we solve GPU sag and cooling at the same time.
the "pcie card" of the future should be a 12x6x4" prismatic module that slides on rails into a socket that provides a PCIe edge connector as well as a XT90 jag (or similar) for power, at a defined physical location, with 48V. Airflow should happen in a defined direction and manner, whether that's axial flow or blower-style.
also please for the love of god, hot air rises, the AT standard means axial coolers are pushing against convection. But we couldn't even get BTX to take off, which is a trivial modification of ATX to reverse the orientation and fix this problem, the DIY market is deeply deeply rooted in the 1980s and will not abandon ATX without some nudging. Enterprise and OEM are fine, their volumes are big enough to do their own thing, but consumers are spinning their wheels fighting IBM's 1980 mistakes.
It's gonna take government intervention, the EU is gonna have to step in and do it like with USB-C cables. The mess of arbitrary card dimensions slowly growing larger over time is the exact same problem that we had pre-standardization with USB too.
At 500W+ maybe the GPU should have its own dedicated power supply with its own power cable. And then the GPU manufacturer could optimize the voltages and amps since they control the whole power chain.
You could have some standardization, a simple power negotiation protocol "I can supply 500W at 24V or 48V" and a standard (big) connector.
A long time ago the display was powered from the main unit, but we moved past that and now displays have their own power cable.
Take it as a reminder to disable the privacy leak (for Firefox set network.http.referer.XOriginPolicy to 1 or 2 [0]) and demand that browser vendors fix their defaults.
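If it helps, this is the whole change in user.js form (a sketch; the value meanings below are my reading of Firefox's referrer prefs, so double-check against your version):

    // user.js in your Firefox profile (same effect as flipping it in about:config)
    // 0 = always send the Referer header cross-origin (the default)
    // 1 = send it cross-origin only when the base domains match
    // 2 = send it cross-origin only when the full hosts match
    user_pref("network.http.referer.XOriginPolicy", 2);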
That's a cool capability and I didn't read deeply enough when I saw the new GaN charger come out to notice that.
I haven't used it but from the outside it all appears very thoughtfully and competently engineered. His products have some of the highest TDP support (passive cases and his PSUs) on the passive HTPC market and the hifi aesthetic is great. I'd love to get an H5 or an H1 (although for a HTPC it'd be nice to have the H1.SODD - the H1.TODD doesn't look nearly as nice imo), don't really need it but the cat would love it.
Streacom does some interesting stuff too (like the DB4 cube case). Not quite as high TDPs and I don't like their body/fin aesthetic as much, but the DB4 is cool. And Akasa as well, love the galactico design.
Mini-box is the company that originated the picoPSU design and they do a fair bit of other miniature-computing stuff as well, including for the embedded automotive market. They have 18650 UPS modules for NUCs and other motherboards that show up as a battery in the OS. The M350 case is also pretty nice for an on-the-desk type mini-PC; clearly to hit $50 you have to minimize costs, but it's really the nicest $50 case that I've used in a long time (the NZXT S340 non-elite got to $60 sometimes). The first picoPSU died within a day for no reason that I can tell, but the second one has been fine, with the one caveat that the laptop brick is not low-noise at all. At one point I tried putting my PCs' line-outs into an AV switcher and was getting a ground loop with a bunch of audible noise. It was the M350/picoPSU: when I unplugged it, the other two PCs (including a Dell laptop) were fine. Maybe not a typical use-case, but it probably isn't the best choice for your SDR host or whatever lol. But HDPlex does have those linear regs...
It's super amusing that the 5700G (get the 5750G on ebay if you want ECC) is roughly as fast as my 9900K in most things but it does it at like 60W in a SFF case, can go even lower without much hit with eco mode. And it also can sorta game (in non-action stuff or older stuff). One night for fun I decided to play Team Fortress 2 at 3440x1440 on it, and it actually got a reasonably playable framerate... with a maxframes-config it would drop to around 30fps in teamfights but hey, returning to my roots playing it on my thinkpad w510/fx880m lol. I might have been able to push it farther with some res reduction but it was doing OK generally actually. It's also (again, roughly, haven't actually benchmarked) around the same speed as my 9900K in H264 encoding. The power scales very little past stock (at least during CPU workloads), just set it for silent and let it run, eco would probably be even better.
170x55x50mm is a honking huge wall wart. Bigger than most laptop power supplies, which makes sense given that it's transforming five times as much power as a typical laptop power supply.
Right that's why I compared it to the bricks that come with boat-anchor laptops, not normal ones.
Tho if you threw 2 of these PSUs side by side in a laptop bag, they'd be noticeably thinner than any MacBook charger (29mm vs 25mm) even though each one delivers >250% the power.
I think at these sizes and weights, the GPU needs to become a board that you screw onto the case, like a second motherboard. Sagging solved. Now the noisy fan, replace it with a preinstalled water cooling block to be integrated with your overall cooling solution, voila: noise and weird airflow solved as well.
Exactly, that's what I mean by "on rails". Imagine sliding a server PSU into place and it latches as it seats... do that but with a "GPU module". The case itself needs to be part of the support to fix GPU sag, and that's only possible with dimensional and connector standardization.
Smaller stuff that still reflects the "add-in-card" intention of the original design is fine, you can still have a bunch of PCIe slots for network cards and USB controllers if you want. But the GPU is not really an "add-in-card" anymore and it's problematic in a design sense to keep acting like it is, let alone to have every single product on the market doing something completely different dimensionally.
That's the "pre-USB-micro" era problem that USB faced.
There's no need for water though, that's a whole can of worms.
I think we should stop having them as separate cards altogether. GPU RAM already can't be socketed anymore and Apple Silicon has shown massive CPU improvements are possible with soldered RAM. It hurts my DIY PC Enthusiast Spirit but buying a mainboard with CPU, GPU and RAM all soldered in for massive gains in efficiency, performance and potentially even price could well be worth it. The interconnection between CPU, GPU and their respective RAM could potentially be massively improved by this.
I am aware that modern gaming consoles are basically this, but they make lots of tradeoffs that a desktop PC doesn't necessarily have to. A single-die GPU-CPU combination, for example. In the PC world you can reasonably pay twice as much and get twice the silicon for twice the performance, a tradeoff consoles can't make.
And as for servers, this could essentially make large 1U GPU racks possible by just having one big board per U with all the GPUs on it.
All-in-one water loops aren't as great as they're made out to be. You still need fans on the radiator; they can be quieter because they're often larger. Pump failures are more common than fan failures and harder to repair. Traveling with a water loop is a nightmare unless you drain it first.
In any case, I've always used the standard water cooling kits and they never failed me (as opposed to constantly breaking fans and huge temperatures). Maybe it's my environment.
Speaking of USB-C and crazy design choices, I'm a little surprised someone hasn't just slapped a bunch of those on a graphics card to provide 100W at 20V each. Arguably the parts should be cheap since they are already available at scale, and 6 ports wouldn't be any larger than the problematic nvidia connector. I'm mostly joking, but for portable/mobile GPUs it doesn't seem that insane, that way there is also some choice as far as power adapter bulk.
> ... and now you want to tell them they're gonna have to buy a whole new PSU?
I saw the connector and did a double take: it has a third row of smaller signal pins which none of my existing power supplies have. Not even my newest, which is a 750W from last year. Some new GPU power connector. So I assume most people had to buy a new PSU anyway.
I am all for higher voltage. 24V is the standard in industrial automation as well as off-highway machinery and European trucks. 48V is common in telco (though positive-ground there) and it is common in industry too; I use 48V supplies for small stepper and servo motor amplifiers for motion control.
People haven't had to buy new PSUs. The adapter has the circuitry built in to detect how many of the connections are populated. For each one, the 40xx cards will raise their power limits by 150W.
fun fact - that circuitry is totally superfluous. The connector itself has 4 small pins underneath the main ones specifically for detection. Nvidia did an Apple here, inventing stupid shit to make it proprietary. All this circuitry does is 4-to-2 binary encoding.
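To illustrate, the adapter's job boils down to a tiny lookup (a sketch only; the 150 W per populated 8-pin figure comes from the comment above, and the exact SENSE0/SENSE1 open/ground table lives in the ATX 3.0 / PCIe CEM spec, which I'm not reproducing from memory):

    # Hypothetical sketch: map "how many 8-pin plugs are populated" to the power
    # limit the card is allowed to advertise. Two sideband sense pins give four
    # states, which is all the "4-to-2 binary encoding" the parent describes.
    POWER_LIMIT_W = {1: 150, 2: 300, 3: 450, 4: 600}  # 150 W per populated plug

    def advertised_limit_w(populated_8pin_plugs: int) -> int:
        return POWER_LIMIT_W[populated_8pin_plugs]

    print(advertised_limit_w(3))  # -> 450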
The reason BTX failed is that it didn't present compelling arguments for enthusiasts at the time.
Were there high-end enthusiast-class BTX boards, comparable to the best ATX boards of the time? I don't recall seeing them in any numbers. Hell, even BTX cases from recognizable manufacturers are quite rare. It's obvious they were making the pitch to the largest OEMs, the Dells and Acers who can source their own cases and boards.
And the sales pitch was most compelling for them. If you've got a roasting Pentium 4 and a modest graphics card, you can fit them in the BTX "cooling channel" and keep them at a frosty 95c with a dinky heatsink and one or two cheap fans. That's not really exciting to an audience already saying "I can spend $75 on a kilogram of heatsink to get my CPU down to 70c, or $200+ on a custom-loop watercooling setup and get to 45c".
Maybe they needed to find a niche market for hobbyists to work with-- for example, the early "silent computing" crowd might have been more interested in marginal increases in cooling. After all, people are willing to consider other form factors when there's some compelling draw (see Mini-ITX)
I also wonder if the original design thesis still holds up. Once the Core series came out, "how do we cool this angry CPU" became less of a fixation, so it was probably easier to switch back towards commodity ATX designs. On the other hand, we've shot back past the original thermal situation and then some-- would the design still hold up with an unlimited 7950X drawing ~250W and a 4090 drawing 400w all trying to exhaust it into the cooling channel, or would they bake each other to death?
I do agree that today's GPUs are ridiculously long, and nonstandard designs make them difficult to manage. Given that most outward-facing drives are gone, and SSDs have replaced most spinning rust in enthusiast PCs, I wish we'd see more "flat desktop" style cases. The GPU isn't sagging because its weight is directed mostly straight down into the case's bracing, convection is probably a little better, but it doesn't really help the power nightmare. They always make the cheap photoshopped jokes about GPUs with mains power, but maybe that's a stopgap solution-- sell the 5090 with an external power brick supplying, say, 48V/12A, and there's no internal cable gore and the power system can be 48v everywhere it matters. Then there's no need to design the rest of the PSU to guess what nVidia will want tomorrow.
The workstation world sort of broadly standardises card length, allowing for support on 3 sides (PCI bulkhead, PCI-E slot, and a card extension with a blade to fit in a rail guide towards the front of the chassis). Even old cheese grater Apple Mac Pros had this setup.
Interestingly though, there's a slow but steady push towards 24V on boats.
More electronics are available in 24V version or simply support both 12V and 24V.
And new boats tend to have a 12V and 24V bank. Electric sailboats usually have a 48V bank as well.
And in an entirely different area, 3D printing completed their transition from 12V to 24V some years ago, and currently there's an active push towards 48V.
I'm personally just looking forward to not spending a fortune on cables due to low voltages.
It's frustratingly expensive to carry 12V a decent distance with some higher current draws.
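To put rough numbers on that (a sketch; the wire gauge and run length below are invented for illustration):

    # I^2*R loss in a cable run at 12V vs 48V for the same delivered power.
    # Assumes ~5 m out and back of 14 AWG copper (~8.3 milliohm per metre);
    # at 12V you'd realistically have to jump to much heavier wire instead.
    P = 600.0            # watts delivered to the load
    R = 10 * 0.0083      # ohms for the 10 m round trip

    for V in (12.0, 48.0):
        I = P / V
        print(f"{V:4.0f} V: {I:5.1f} A, {I*I*R:6.1f} W lost in the cable, "
              f"{I*R:4.2f} V drop")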
Actually, I thought the issue with automotive is that 48V really isn't enough.
My understanding is that actuators want to be operating at much higher voltages and that 100V-ish is the lowest they really want and that modern actuators would much prefer to be up at 200-400V.
The losses from a higher buck ratio are not insignificant. It requires higher-voltage-rated MOSFETs, which generally have worse performance, as well as increased switching losses, resulting in more heat where you least want it. Most other use cases for 24/48V involve significantly longer wires than in a PC case. I think the actual net win, if any, would be quite small.
I have a feeling this will make the DC-DC converters more bulky and hot on the card itself.
Not entirely sure though as I've never built those.
However, in the longer term it's probably time to start optimizing for power efficiency just like Intel did after the Pentium 4. Cards can't keep increasing requirements in this day of energy crisis. And the climate problem ensures it'll be like that for a long time.
Generally the final stage is in the 1v ballpark. Giant buck converters are already the standard and we're still below 1kW for the foreseeable future so you don't even need something novel. Examples: https://www.powerelectronictips.com/powering-graphics-proces...
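For a sense of scale (ideal-buck arithmetic only; the 90% efficiency and 450 W figures are just assumptions):

    # Ideal buck converter: duty cycle D = Vout / Vin; input current falls as
    # Vin rises for the same output power.
    def buck(vin, vout, pout, eff=0.90):   # eff is an assumed overall efficiency
        return vout / vin, pout / (vin * eff)

    for vin in (12.0, 48.0):
        duty, iin = buck(vin, 1.0, 450.0)  # ~450 W of GPU core power near 1 V
        print(f"Vin = {vin:4.0f} V -> duty {duty*100:4.1f} %, input {iin:5.1f} A")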
I sort of wish that we used something more like this[0] for add-on module power connectors. And a better mechanical design. The perpendicular card form-factor is silly.
As someone who tinkers with these things a lot, I'm happy for the voltage I'm playing with not to be increased. There's plenty of scope to improve the plug design without approaching anything near the fragility of this nVidia adapter. There's no pressing need to increase voltage.
It works the other way too. Low voltage is not really "safe", we can just tolerate the risk most of the time. Soak your hands in some saltwater & then grab 12 volt rails, it'll hurt like hell.
The fragility I think is the main concern here, like you said. I hate when I have three 12V, 8-pin connectors on my GPU, but between that and a small connector that catches fire if it's not perfectly 100% seated and strain relieved, I'd take the three connectors.
Buildzoid on YouTube has a very good response to this - he does not agree that the fault is the (still terrible) wires at the back of the connector.
His reasoning is that the melting is happening down at the pin end of the connector, not at the back, and the pins that are melting the most are the edge ones, not the middle ones.
There's about 10 minutes of critique of the various ways other people have been evaluating it, which is interesting for sure, but the meat starts here: https://youtu.be/yvSetyi9vj8?t=744
The failure doesn't necessarily have to happen at the junctions for the junctions to be a cause. It also makes sense that the edge pins would heat up more: they're surrounded by less mass, and given the resistivity of the wiring shown, they're a likelier conduit for current and heat.
Reminds me of the molded MOLEX to SATA adapters for HDDs that suffer from similarly shoddy wiring design and are prone to melting/fires. https://www.youtube.com/watch?v=fAyy_WOSdVc
> The failure doesn't necessarily have to happen at the junctions for the junctions to be a cause.
Indeed, it can also happen that because some connection is failing, too much current flows through the remaining ones, but this is clearly not what is happening here (because it's not the inner connectors that are melting, and also because Igor tried and couldn't replicate this). This really looks like heat from high resistance, and the outer connectors are simply those experiencing the most mechanical stress, and it's really easy for these 2-slit connectors to get bent out of shape.
I agree, that does not make sense. There must be a problem with the connection to the pins, and I'd really like to see a close-up of the female connector on the problematic card, because if stuff gets bent out of shape there even slightly, you can quickly get high resistance.
EDIT: Watched the video further, and actually the cable has the female side and they're using 2-split connectors. Yeah, that's asking for trouble when you push 50 amps through those... Sheesh, such shoddy stuff for a $1600 card, just incredible.
Buildzoid is great, but it's worth noting he has a very specific perspective, that of a hardcore enthusiast (hence the name of the channel), and he often points that out. Here I think he ends up kinda just saying "why not just stick with the old ones", which is fine in comparison to something that sucks, but it doesn't hold up more generally for most people.
I get he may not care, but plugging four individual power cables into a GPU is a pain for consumers, and having a single connector for everything is great from the normal consumer perspective.
He eventually gives a good reason for it. The old cables have a safety factor of something close to 100%, whereas the current connector has a margin of 15% or so. (Going by memory here).
The takeaway being, even with a small fault in the connector, there's more than enough margin to ensure safe delivery and therefore no molten pins or terminals.
The current connector seems to lack this, and if one of the terminals does not make a solid connection, then the drop in effectiveness is enough to push it out of spec and cause a failure.
I do greatly enjoy Buildzoid, and I was eagerly awaiting his video (and one from Gamers Nexus; note they haven't commented yet) from someone who actually looked at this from an EE perspective. I recognize his video was titled "rambling", but he largely repeated himself for 30 minutes, and scattered the relevant datapoints in the video in a few short snippets. A little planning on his delivery would have gone a long way.
I get that, but if we can provide a safe single cable option, it's clearly preferable for a number of reasons. I don't believe this converter being badly made means the whole spec is not viable.
How often do consumers plug power connectors into a GPU for that to be a problem? Spending a few minutes routing multiple power cables for an expansion card that once installed sits in place for years doesn't seem like a big deal to me.
Clearly it's not a huge deal, but it's still preferable. Not having to route for SATA drives thanks to the rise of M.2 is often praised by people who build PCs, and it's a similar kind of deal. A new spec that is more convenient is still a selling point vs the old spec on new PSUs.
The dongle is a short-term conversion because people have old PSUs. In the long term this standard will be built into modern PSUs and it becomes a single cable.
You have 6 pins on each row that are supposed to each have their own wire coming from the PSU. If you bend the connector and only 5 pins connect then you expect to get 5/6 of the power the connector can handle.
The adapter has connected all four wires to all six pins. Losing one of the four cables is actually irrelevant because that means less power is going into the pins. What matters is that if you are missing one pin, then all 300 watts (single row) will go through the remaining pins. Power density is now 60 W per pin. Lose another pin and you get 75 W per pin over the four remaining pins. This wouldn't happen with a dedicated cable per pin.
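The arithmetic spelled out (the 300 W per row figure is taken from the comment above):

    # Power per pin as pins lose contact, when all wires are bridged together.
    row_watts = 300.0
    for pins in (6, 5, 4):
        print(f"{pins} pins seated: {row_watts / pins:.0f} W per pin")
    # -> 50, 60, 75 W per pin; a dedicated wire per pin wouldn't redistribute
    #    the full row load onto the surviving pins like this.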
I think we would then see more problems with this connector - while it is new to the consumer space, all of Nvidia's SXM boards use Molex Micro-Fit connectors for power. For example, my Dell C4140 has four 10-pin and four 8-pin Micro-Fit connections to the SXM2 board, and they are bent at 90 degrees in the 1U chassis.
I was about to post something similar but you already found something with more details.
The resistance between the male and female connectors can be much higher than most of us would expect. While I was down the rabbit hole trying to power one of my SBCs correctly, one of the findings was that the resistance introduced by the connector can go up to *0.5 ohm* (don't even mention those dodgy power supplies and cables). But even if the resistance is only 0.1 ohm, it's still a big problem, as the current is for sure above 10A for a single pin, and then the heat dissipation will be over 10W, which will of course be concentrated around the contact point, which is close to the tip. This explains why the tip side was melting.
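A quick back-of-the-envelope on that, using the figures above:

    # Heat generated at a single degraded contact: P = I^2 * R, dissipated
    # right at the mating surface near the pin tip.
    I = 10.0              # amps through one pin (per the comment above)
    for R in (0.1, 0.5):  # ohms of contact resistance
        print(f"R = {R} ohm -> {I*I*R:.0f} W concentrated at the contact point")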
The 4090 is a stupid card pushed way past the efficiency sweetspot for a few percent in performance so NVidia can claim they have the fastest card. The power is insane - 600w is more than the rest of the computer. The cooling solutions are ridiculous - 3.5 slot card and 2.5kg for the rog strix version. Benchmarks show that you can set the power limit to 60% to get 95% of performance in games along with better thermals, noise, etc.
I wonder if Nvidia is cranking up the performance not for games, but other applications such as 3D rendering and AI/ML. If you look at the benchmarks for Blender [1], a popular open-source 3D software, you will see the 4090 easily beats out all the other competitors, with 2x the rendering speed of the second on that list, which is another Nvidia RTX card (3090) that used to be the fastest till the 4090 came along.
The formula A x B x C = X, where A is the number of vehicles in the field, B is the probable rate of failure, and C is the cost of out-of-court settlement for that failure
That line from Fight Club really needs a whole lotta asterisks and qualifications. There are plenty of forced-recall scenarios where the manufacturer would prefer to use the ABC<=X approach but are not allowed or simply over-ruled by government. Safety of life being the primary over-riding concern, for good reason.
I'm pretty surprised this issue has gotten as long in the tooth time wise as it has. I remember seeing a story about this ticking-timebomb power connector like 1.5 months ago and then heard nothing. I'm guessing there is scurrying behind the scenes between then and now, but you never know.
I'm pretty certain that safety of life is not the primary over-riding concern (also for good reason) - modern vehicles can be made much safer by requiring advanced safety features (see: https://www.consumerreports.org/car-safety/cars-with-advance...) - but these aren't mandatory because there is a cost associated with each life saved, and, at some point, we as a society don't value human life as much as $$$ - so that's the line we draw.
Safety of life is not a boundless good. All of the good things around us come at some risk to life. Of course, the government's tendency to overvalue safety (or to be more neutral, to inconsistently determine the value of safety) in a particular proceeding (be it before a court in a lawsuit or before a regulatory agency that might order a recall) is just another part of the calculation.
Ok, and? Aside from the pure theory banter, I expect the government to come down hard on a manufacturer when they knowingly ship a device with a sustained heat/power issue resulting in fires.
The fight club reference is to internal information and decision making processes, it doesn’t imply that the company is the only thing that can decide to do a recall.
Which cards are even competitive with the 3090, much less the 4080? It's perfectly acceptable for Nvidia to admit they were wrong about power scaling if they offer an unlocked mode for idiots in fireproof houses.
I got a nice settlement check from AMD who had to admit that their 8 core CPUs were actually 4 core CPUs with dual parallel math units. I'm pretty sure a post-launch reduction in power and performance of these cards would qualify any current owners for a refund.
Nvidia are doing something like this now, advertising their CUDA TOPS as double their actual value but with a footnote telling you this is for "sparse" multiplications. Here "sparse" means each dot product can assume half of each input vector is zero.
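Roughly what that footnote means, as I understand NVIDIA's 2:4 structured sparsity (an illustrative sketch, not their implementation):

    # With 2:4 structured sparsity, 2 of every 4 weights are zero, so only the
    # nonzero values and their positions are kept and half the multiplies are
    # skipped - hence "2x TOPS" on the spec sheet, for the same arithmetic result.
    import numpy as np

    def dense_dot(w, x):
        return sum(wi * xi for wi, xi in zip(w, x))          # len(w) multiplies

    def sparse_dot(w_vals, w_idx, x):
        return sum(v * x[i] for v, i in zip(w_vals, w_idx))  # len(w)//2 multiplies

    w = np.array([0.5, 0.0, 0.0, -1.0, 0.0, 2.0, 0.25, 0.0])  # 2 of every 4 zero
    x = np.random.rand(8)
    nz = np.flatnonzero(w)
    assert np.isclose(dense_dot(w, x), sparse_dot(w[nz], nz, x))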
It's far from certain that would fix the issue, though. If the problem is some edge case problem with the connection, what's to say it couldn't happen under standard load?
The thing is, the 4090 is the pushed-past-the-limit supercar of the generation, these things always have issues. It's just due to market conditions that they didn't release all the somewhat reasonable cards (4070, 4080) yet. They don't want to cannibalize 30xx series sales, they don't want to delay 40xx series, they want the top of the charts. shrug
Just don't buy a 4090. The technology is incredible, the model is stupid. Buy a 3080 Ti, or wait for a 4080. Like, don't buy a $1 million Bugatti, you could get a $250k supercar and it would probably be better for any real-world use. (But do people buying $50k BMWs complain about these things?)
I'm not sure people have been yelling that the spec or form factor are the problem, they've been yelling that NVIDIA is the problem, exactly for how they implemented and supply a 12V solution (which includes both the physical products they supply, and the messaging they've put out around it) which this article yet again underlines as being both real, and even worse than initially thought.
No need to beat about the bush, the "certain celebrities" are folks like JayzTwoCents, who are willing to destroy their relationship with NVIDIA over this by showing that the company intentionally sells products it knows are fire hazards, lied about that to the public, doubled down on the lies when they got called out on it as more people started reporting on it, and kept doing so all the way up to when official paperwork showed they knew about the issue and went ahead with it all along. The initial reports were "this is dangerous", but escalated to "NVIDIA knew this could catch fire and pushed for sales anyway, this cannot be okay".
If this was a cable that just bricked cards, or even entire computers, that's a dick move but it's just a computer. No one died, just some hardware got destroyed. However, that's not what it does: it can literally start fires and people can die, and there is publicly available documentation that shows that NVIDIA is strictly liable (in the legal sense) for gross negligence (again in the legal sense). NVIDIA should be all kinds of sued and federally fined here.
I was with you until: "and fined for half their annual profits by the FCC". Not only is that not the FCC's job, but that's an insane fine for a relatively small problem. If anything, NVIDIA should be required to recall the adapters and replace them with working ones at their cost, and maybe pre-fund a settlement fund to pay out in cases of damage due to this connector.
Fair enough, changed it to just federally fined. But maybe going "we know this can cause fires, we're going to sell it anyway" should come with an insane fine. If you're willing to burn down the place for sales, see if you can recover from the same being done to you. And if you can't, maybe that should be the price for prioritising sales over human lives.
You mean the commission whose highest "omg so big" fine ever was a paltry 4 million dollars? Pass, this needs a bit more of an iron-fist "we take half your revenue. If you go under, tough shit, you deserved it" approach.
I'd like to get better information, but google sure isn't finding me any, got a link that lists the fines issued per year?
And of course with that said, $27 million is nowhere near enough to hurt a company like NVIDIA, which pulls in SIX BILLION DOLLARS per quarter. Even if they slapped down a 30 million fine, that'd be less than half a day's revenue, and basically says "this is perfectly acceptable behaviour in our country, thank you for dining, here's the bill, no tip required, come again".
For megacorporations like NVIDIA, "a million" is not a big number, it's passive income that rolls in over a lunch break. Real fines for megacorps need to start with a "b" rather than an "m" to actually matter.
This is an overcorrection to OPs comment. If Nvidia knew about the very real risk of fire but continued to push unsafe products with misleading statements, that is a significant problem and hints at a fundamental rot within the company.
Agreed, and this also seems like a valid case for holding individuals accountable if it can be shown they knowingly ignored or covered up the situation. Perhaps criminally so if there were injuries.
They spend lots and lots of money to make sure this doesn't happen. It's more or less why the entire right wing legal movement (Federalist Society et al) exists.
Sadly, this. "Tort reform" is the name of the game, and it's depressing how well it's worked. Much like the more recent efforts to rebrand estate taxes as a "death tax" to bolster public support for the wealthy giving up less of their inherited fortunes.
If they were aware of the risks/problem then I do not think it is unfair punishment. We as a society need to have a stronger response to companies choosing profits over safety.
Setting people's houses on fire is in fact a large problem. Potentially setting houses on fire is also a problem, but it is "relatively smaller" than the liability of actually injuring or killing someone, or hundreds of people.
Melting connectors is not all that uncommon of an issue with shoddy electronics. Well designed hardware rarely has this problem, but if you buy enough no-name import junk, you'll probably run into this with other products. (I have) Most of the time it does not cause a fire.
This isn't okay, but it also isn't Surfside condo collapse levels of bad.
This is a high-amperage connector. If it melts, it is significantly more prone to catching on fire than the majority of connectors in no-name import junk.
Fire is caused by heating something to its ignition temperature, and heat is a function of the conductor it's flowing through just as much as it's a function of the current itself.
The current provided from a base-spec USB charger is enough to start a fire, given the right (er... wrong) conductor. Plenty of us remember lighting Estes rocket motors with flashlight batteries (zinc carbon, no less) as a kid.
When a junction melts, it is often the case that the connection itself will become much more resistive, which will make it heat up more and more. For an application where the PSU might be able to supply hundreds of Watts, it's much easier for things to get very hot and ignite.
This is despite the use of a good conductor.
And the issue is that when the power source is that much more powerful you're more likely to make whatever is nearby catch fire.
This isn't an "insane fine" or a "relatively small problem."
These things are dangerous and they had to know that they were dangerous when they shipped them. They deserve to get financially nuked for being so ridiculously irresponsible. Companies that put profits ahead of consumer safety shouldn't be allowed to have profits.
There's "they should've known, but it seems they didn't", which is bad engineering and should be punished somewhat.
On the other hand, this seems to be "they knew about it and went ahead anyway" a la Boeing 737 MAX, which is many, many levels above and it should be the equivalent to a financial nuke, IMHO. Recoverable, but something that is talked about in the company for the next 15 years.
Dare I say, if there's provably a person who was presented with this as a problem and overrode the decision so as not to delay the launch, there should be criminal liability for them?
Yes, there should be, in addition to financial penalties. Unfortunately, what is considered to be a "fair" punishment for a corporation rarely ever reaches the level of criminal prosecution and never forces the corporation to change its operating strategy. In an ideal world, corporations would be punished financially, criminally, and be forced to reorganize whenever they attempt to wipe obvious dangers under the rug for profit's sake.
An honest mistake is different from an informed decision.
The consequences are not small, and the action was apparently fully informed. Those two things combined constitute a very Big Deal deserving no forgiveness at all.
The last time I commented on a previous story about this debacle, people pointed out that molex melt all the time and that this wasn't a big deal. I saw that they were right that molex do melt sometimes, but these adapters have been on the market for what, a week? It felt like there was something wrong with the quality for it to happen so often with specifically these parts.
Melty Molices mostly affected the old 5 1/4" HDD connector and that was only really a problem because of extremely poor tolerances in third-party connectors leading to bent-open sockets. Mate-n-lok, which is the same type of connector (available in a gazillion variations), is used in huge numbers to this day and has zero reliability issues because the connector design is unproblematic if done correctly.
What's sorta new here is that these are genuine connectors - not aftermarket dreck - and they're already having issues at, as I understand it, around 70-80 % of the specced load (specified for >600 W, while these GPUs are normally limited to 450 W or so).
Molex connectors do indeed melt but rarely catch on fire. Molex connectors are made of nylon which melts as low as 170C.
The pictures of the internals of this adapter are absolutely horrifying. This is 737-MAX style gross negligence: Several consecutive worst practices/shortcuts that ensure catastrophic failure one way or another.
Hook up wire generally doesn't have an insulation rating over 105C (dry), so 170C melt point is fine (assuming that is some real transition temp, and it isn't just melting slower at 100C)
molex-to-SATA adapters melt all the time because the "molded" SATA connectors use thermoplastic that starts to soften well before it actually melts... and those plastic temperatures are low enough that you hit them in an average case. They are unsafe even for HDD usage let alone the (insane) people who use them for GPU mining rigs. Just get some custom 6-pin strings made, people, it's cheaper than burning your farm down lol
that said it's the SATA side that's the problem, not molex.
molex does have a reputation for scorching and arcing at higher current levels though. So does the Tamiya connector that is super common in the RC car world, terrible connector.
Fun fact: it doesn't even need to happen once for a company to be guilty of gross negligence when there's a public paper trail that shows they knew about the problem, then went ahead with selling products with that problem.
But: it's happened more than once now, which is exactly the bar that needs to be cleared for it to count as evidence of a pattern of real-world damages in court.
Too early to tell. Not only are the cards with these adapters just now getting into the hands of consumers but some people didn't realize there was melting taking place until they unplugged the adapter.
Having heard "fire hazard" from a nvidia fan as a justification for their user-hostile signed firmware (it blocks free drivers from using the cards at full speed), now I think we can put that excuse among the myths.
I see no good will in blocking free Linux drivers, just the same crap that earned Torvalds' salute.
That they have now opened up a kernel driver is a small improvement if the firmware remains signed.
Power is nearly free in civilized countries, a couple thousand isn't much for the developer & hobbyist market the card is aimed at, and by virtue of being well-off, the expense of covering those two demographics' housing costs in case of a fire is quite great.
These RTX4090 cards are absolutely gigantic, why would an engineer choose such a small power connector? There's plenty of room for larger connectors or multiple connectors.
Or at least have current sensing and thermal sensing inside the card on each supply rail at the connector header, so it can fault if something goes wrong.
Please shoot me down in flames if needed, but maybe the CPU should be on a riser board and the GPU on the mainboard? Look, it seems to me that Nvidia needs to come up with its own x86 reference platform that revolves around supplying enough resources to the GPU rather than relying on adding layer upon layer of bandaid solutions. Listen, I'm not an engineer, and I don't claim to understand voltage, amps and watts, but it seems plainly obvious that if Nvidia wants to keep pushing discrete components that need a lot of juice (as opposed to Apple's SoC strategy), then they need to come up with their own robust reference platform.
I was looking at an old video card I have, the GeForce 4600ti, and remembering thinking it was _huge_ at the time compared to my Voodoo 3 3000 before it. Then the 8800gt seemed huge. Then the RX480. Then my current 3070.
At a certain point it's just stupid. My work laptop (Macbook Pro M1 Max) can do some very impressive stuff in such a small quiet package, and it just makes sense to go with a single large APU in the future.
I know what you are saying, I have an M1 Pro MBP, and the thing is as quiet as a church mouse.
It just seems to me like Apple (with the trashcan) painted themselves into a corner with thermals. Then Intel did the same by missing node generations and just pumping more current (or voltage, heck, I don't get electricity). Now Nvidia is doing the same with an aging reference platform that needs an overhaul.
I feel that it's simplistic to say that Apple has the right strategy with APUs, because gamers want faster and faster gaming performance, whereas Apple are making custom silicon for very narrow use cases (namely video editing) and for consumers perusing the internet and organising photos.
Funnily enough, I do think that Apple have the right formula longer term for general-purpose computing with APUs.
It's just that if Nvidia want to keep pumping out bigger and bigger GPUs while upping the voltage and heat, then things need to change.
They started this in the last generation where the founders cards had a different-but-similar connector.
In this generation, they would actually need 4x8pin to reach the 600W maximum these cards are supposed to be able to draw in a fully overclocked state, so really this just comes down to PCB space, but with the draw getting this high, the cable also has some signalling wires so the card knows how many Watts it should limit itself to.
As someone ignorant to this space, considering the PCB is already shorter than the heatsink, I assume the problem is more of re-arranging other components on the PCB than the actual physical space the final product takes up.
I mean, firstly just that consumers hate having to plug in a ton of cables: it's bad for aesthetics, it's annoying to manage the cables, etc...
The bigger reason is probably to save board space as nvidia have been shrinking the form factor of the actual boards so they can dedicate more room to flow-through cooling. Connectors take up a lot of board edge.
The new spec does also have data pins so the devices can negotiate for power which could in theory reduce issues and inform the consumer better about limits with future PSUs and devices.
So there are advantages to the new spec, both specifically with the form factor and more generally.
That's what I'm thinking, too. EVGA seems like a pretty reliable manufacturer in my experience, their bus-powered 1050 Ti lasted me 7 years and still runs in my brother's PC, to my knowledge. Wouldn't surprise me if Nvidia rejected some board revision EVGA suggested, which led to the ensuing fallout.
I'm still looking for a good replacement to my EVGA bus-powered 1050Ti. I just want a bus-powered GPU that can run desktop applications. I have a separate one with a power connector for ML stuff.
Are the new Intel GPUs bus-powered? If not, they could make a killing off of frugal Linux users by offering a chip designed around the ~75W limit of a PCIe slot.
Please don't needlessly shit on Intel GPUs. The last thing we need is sustaining the Ngreedia and AMD price gouging GPU duopoly.
Intel GPUs are great if you want the best linux drivers and they do AV1 decode and encode.
If you're, for example, a gaming streamer, or you need to encode CCTV or other kinds of footage and stream it as cheaply as possible, or if you do video editing in DaVinci resolve or other apps, then the AV1 accelerator capabilities of such a budget GPU are a godsend.
If you aren't planning on using it for gaming, check out the Intel Arc Pro A40 instead. It's a nice little single thickness 50 watt bus powered card with modern specs.
That was on my short-list to look at, good to have another data-point. How is the Linux support? My most recent video card experience with AMD is a HD 6350 and that has very mediocre linux support (as became obvious when KDE and Firefox both started using GPU assisted rendering by default)
AMD open sourced their drivers and maintains them directly in the kernel now, ever since 2017. No issues at all for me using Ubuntu 20.04 or 22.04, which uses Wayland and GPU rendering by default.
This whole thing is cracking me up. I guess this is why the 600w card didn't show up. Maybe there was something to the pre launch reports about cards melting themselves.
You know what would teach Nvidia? Buying more 4090 cards at crazy prices. Vote with your wallets. Wait on actual lines to give them more money for stupid products. Moar powah, more upscaling crap, more latency. Moar
Connectors are not something that should ever get hot.
Throwing thermistors all over a GPU to prevent it from catching on fire, is like putting bandaids in a box of cheerios so when the razor blades cut you, you can stop the bleeding. Cheerios should instead never have razors in them.
> Connectors are not something that should ever get hot.
Correct, that's why you can have a system to check if any connector is getting hot.
> Throwing thermistors all over a GPU to prevent it from catching on fire, is like putting bandaids in a box of cheerios so when the razor blades cut you, you can stop the bleeding. Cheerios should instead never have razors in them.
No, it's more like having a smoke detector in a house. Sure, household equipment shouldn't suddenly catch fire, but sometimes they do.
Here I am, crimping wires for a 3D printer that will draw at most a quarter of the current, and NVIDIA is just giving out connectors with shoddily soldered connections? Shame! It also looks like they think strain relief is optional.
600 watts is an extraordinary amount of power for these tiny cables/connectors; that's something I'd expect to be an Anderson Powerpole, or XT90, or some other kind of high-power connector if I was building a little robot or something.
Soldering is not the correct solution for high power/current applications. Car manufacturers openly instruct "do not solder, use included crimp connectors" for fuel injector wiring repairs for example.
This is because soldered connections are not suitable for applications subject to vibration or repeated motion. They will weaken and break. For stationary applications, soldered connections have superior electrical characteristics.
I'd be wary about extrapolating from cars or other vehicles: they expose their components to far more vibration and shocks than is typical in a house or datacenter.
Counterpoint: cars don't get their cables moved about so often, or bent at sharp angles for aesthetic reasons. Soldering without additional strain relief is a terrible idea - even for low currents.
I never said they don't - rereading, I should have used the word "manipulated", because I meant "moved (by a person)": a PC[1] gets opened more frequently than a car hood.
1. Especially a pc with an RTX 40X0 card - because only enthusiasts are buying them at this point.
As an HPC system administrator who also works on hardware, I'd say that servers' lives are equally hard from a power and temperature perspective. Their lives are not easier just because they are not vibrating.
A server under max load 24/7/365 is really testing its design limits.
The issue is how to best secure stranded wire to a terminal, not how to secure the terminal to the PCB. Soldering is obviously preferred for the latter case, but when dealing with stranded wire, a good crimp is actually better than a good soldered connection. Done properly, the crimp forms a cold weld that's as close to a perfect connection as you can make.
Soldering can work well too, but there are so many things that can go wrong. Heat can deform the connector housing, lowering the contact pressure. Solder might run down the connector and coat the contact surface, making it unreliable over time. It might also wick down into the insulated portion of the wire, turning what should be a flexible part of the interconnect into a reliability problem, especially in the presence of motion or vibration. Last but not least, lead-free solder complicates visual inspection, making it harder to reject bad connections at the factory.
You can make bad connections with either soldering or crimping, but all in all, crimping usually wins when it comes to cable assembly. In a production setting, it is easier to establish and enforce a high-quality process when all you need is the right crimping tool and die and a few minutes' training.
" Last but not least, lead-free solder complicates visual inspection, making it harder to reject bad connections at the factory."
I x-ray every soldering job out of the oven and off the irons where I work, and that's a TINY LED company. Visual inspections of solder joints can be very unreliable no matter which solder is used. A company making these sorts of cables should be using even a cheap $10K Scienscope at the minimum.
That's about vibration and bi-metallic corrosion, not current handling. Gas-tight crimps solve both problems whereas solder fares poorly. Low-cost PCs can't be expected to tolerate either high vibration or the elements, so a soldered design with sufficient margin for imperfect conductors could work fine in a PC.
Way to give yourself a black eye, NVidia. High power needs a margin for error and there is clearly none here.
"Low cost" here includes all GPUs hosted on ATX type expansion boards, intended for price sensitive markets. "Extremely reliable server hardware" (N+1 GPU and better) doesn't use this design. Typically in HPC systems CPUs and GPUs are mounted on planar daughterboards that bear no resemblance to conventional PC hardware: NVIDIA's HGX platform (used at AWS) for example.
soldering is a terrible idea for cables that need to flex, which is why you’ll typically only find them used in applications with major strain relief, or inside non-user-serviceable enclosures (e.g. inside an ATX PSU)
You need strain relief either way, soldering or crimping. A well soldered joint will have lower resistance than a crimped one and be more resistant to vibrations.
In most applications crimping offers better vibration resistance and better conductance. Most solder is on the order of 10x as resistive as pure copper, or more.
Welding can be better than solder or crimp, but it's uncommon except for applications like wire-bonding and battery cell termination.
In general, any application where you have a fixed end on one side and a free end on another (wire to board, wire to panel, wire to X), a crimp termination is by far the better choice.
The actual amount of contact area in a soldered joint is much higher though. The crimped joint will have the tangent of each strand on the connector, for lots of tiny little contact points. Soldering, you can take the same contact points but also add the higher resistance solder for all of the area in between, dramatically increasing the cross sectional area of the contact.
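To put rough numbers on the resistivity point (the values here are from memory and the joint geometry is invented for illustration):

    # Copper vs. tin/lead solder resistivity, and what a thin solder layer adds.
    rho_cu     = 1.7e-8   # ohm*m, copper
    rho_solder = 1.5e-7   # ohm*m, roughly Sn/Pb (lead-free alloys are similar)
    print(f"solder is ~{rho_solder / rho_cu:.0f}x as resistive as copper")

    # A 0.2 mm solder layer over a 2 mm x 5 mm pad is short and wide, so it
    # still only adds a few micro-ohms in series - geometry dominates.
    t, area = 0.2e-3, 2e-3 * 5e-3
    print(f"that layer adds about {rho_solder * t / area * 1e6:.0f} micro-ohm")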
It will only melt if it’s either undersized or a bad joint to begin with. A properly-sized, well-made solder joint will have no reason to melt unless heated externally by something else failing.
Maybe I am running low on coffee because I felt like I was having a stroke while reading this post.
>However, the “safe” is only valid if e.g. the used supply lines from the power supply with “native” 12VHPWR connector have a good quality and 16AWG lines or at least the used 12VHPWR to 4x 6+2 pin adapter also offers what it promises.
Like what is this? I know those words, but that sentence makes no sense.
>However, the “safe” is only valid if e.g. the used supply lines from the power supply with “native” 12VHPWR connector have a good quality and 16AWG lines or at least the used 12VHPWR to 4x 6+2 pin adapter also offers what it promises.
However, the “safe” designation for the connector is only valid if e.g. the used cables from the power supply, with a “native” 12VHPWR connector, are of good quality, and the 16AWG cables or at least the used 12VHPWR-to-4x6+2-pin-adapter also live up to what they promise.
I really don't understand why power connectors aren't just + and - anymore. There's no reason for more than 2 connectors.
What is this 12-pin Molex bullshit? Why the hell would you need 12 pins? Use some Anderson PowerPole connectors, XT30/60/90, or some other connector designed to handle high currents.
Actually, that is NOT (as such), “how I feel” when I read articles like this - NOT written by professionals (But who is one anyways) - Confucius says, which I tend to agree, which if you, like me, concur.
I don’t think it’s valid to use “only” with “e.g”. “e.g” means it’s an example, which implies the existence of other cases that satisfy the criteria. “Only” implies some uniqueness of the subject.
“It’s legal to drive only if e.g. you have a driver’s license.”
It’s not a great way to say things but it is meaningful. The meaning is “only if [unspecified list of things], and [x] is an example of an item in that list”.
> However, the “safe” is only valid if e.g. the used supply lines from the power supply with “native” 12VHPWR connector have a good quality and 16AWG lines or at least the used 12VHPWR to 4x 6+2 pin adapter also offers what it promises.
Here’s my interpretation:
> However, the term “safe” is only valid if certain things are true, e.g., the used supply lines from the power supply with “native” 12VHPWR connector are of good quality…
The site is hosted on a .de domain, so there is a very good chance that the author is German. It would have been nice if you had considered that before writing such angry and negative words about the author's work.
> exactly for how they implemented and supply a 12V solution (which includes both the physical products they supply, and the messaging they've put out around it) which this article yet again underlines as being both real, and even worse than initially thought.
I'd recommend people gloating to read the article.
EDIT: (see my edit below).
The conclusions are clear:
- The problem is NOT the new connection; that's fine. New PSUs come with a connection that does not need any adaptor and those are safe and work fine.
- The problem is a poor-quality adaptor shipped with 4090s for people that buy a $1600 GFX but then skimp on a new PSU and want to pair it with an old one (EDIT: "skimp" is out of place and victim blaming; I'd guess it would be more appropriate to have said here that NVIDIA and partners decided to add an adapter to avoid suggesting that users need a new PSU).
These adaptors are distributed by NVIDIA but built by a supplier. Igor's recommendation is, I quote: "NVIDIA has to take its own supplier to task here, and replacing the adapters in circulation would actually be the least they could do.".
EDIT: This comment can be misunderstood as me speculating whether the OP read the article or not. I am not speculating: the OP did not read the article, which claims the opposite of what the OP claims. The OP claims that 12V solutions are the issue, while the article states that they are fine, and as proof shows that new PSUs implement them correctly. In fact the _goal_ of the article is to set the record straight about this, by clarifying that the only problem is the quality of the adapter, not 12V per se. So this comment is not speculation about whether the OP read the article or not, but a response to set the record straight for those who might read the OP's comment only, but not the article (I often come to HN for the comments more than the articles, so I'd find such a comment helpful myself).
"Skim on a new PSU" sounds like people are cheaping out or something. Many people already have a more than sufficient PSU and replacing it just for another plug is a waste of natural resources.
NVidia should just include an adaptor that's not a fire hazard. The consumers are not to blame here.
Other commenters are claiming that NVIDIA KNEW the adapter had an issue with melting and/or catching fire. If that's true, I think NVIDIA still has 100% liability.
If it was late in the development cycle that this was discovered, then the proper thing to do would have been to delay the release, or just not include adapters and offer them later. It would have been a minor PR hit, but not nearly as bad as shipping adapters known to be faulty.
The problem isn't with older PSUs, they can work fine with a good adapter.
The problem is with the adapter design, it is not just one bad choice but multiple layers of negligence that compound the issue.
* The adapter has 6 pins all bridged together by thin connections that can break.
* 4 heavy gauge wires are attached to those pins with a surface mount solder joint. They're not through hole soldered which would provide more contact AND far greater strength. They're not crimped which would provide the best contact and strength.
* There's no strain relief. So if you hold the cables close to where they're soldered to the connector and flex it you can easily break those surface mount solder joints.
* Because the 6 pins / 4 wires are all bridged asymmetrically, some bridges have more current passing through them and if they're fatigued or damaged they'll have higher resistance. Higher resistance means more heat.
Overall it's just really poorly engineered on multiple levels. It's diarrhea poured over-top an open face shit sandwich.
Yep. In general whenever you rely on a solder joint to provide mechanical strength or support, you're in a state of sin. Running 600 watts through such a connection is pure negligence.
Yes. They should not have included any adapter at all.
Those with older PSUs should have had to make a decision about whether they want an adapter or not, and then pick an adapter of the appropriate quality.
I'd recommend people gloating to read the article.
This is a neat way to parallelize and scale up this swipe but it's still the same swipe:
Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
Thanks. I've added an EDIT to clarify that I am not speculating about whether the OP read the article, and that the only point of my comment is to set the record straight for those who often just come to HN for the comments (like myself).
Just take it out. You don't know who has and hasn't read the article, how much they are 'gloating' and it's one of the oldest known bad tropes of internet forums which is why it's in the guidelines. Your comment only gets better without it.
The sentiment is agreeable though. You see smart people everywhere, but then turn to their products and these have obvious flaws. Not hidden ones, not ones triggered in a particular situation. Ones you can't ignore, which greet you and tell you they are your new neighbors, happy to meet you every day.
MDN recently screwed up their search input. It stays collapsed until you click on it (some bright mind decided that would be so cool?), and there is no I-beam cursor until you start typing. So confusing.
CDNJS recently “updated” their site so you see a search prompt, (maybe) click on it, start typing, aaand it discards everything you've typed before 3-4 seconds have passed since the page load. Who allowed that to be pushed into production? The thought that some specific person is responsible for this could drive you nuts.
And this is why hardware is hard; it requires good engineering culture. Unfortunately, being meticulous is directly opposed to profit.
You have to look at the likes of Apple to find a good hardware culture, where "image" has a higher priority, allowing the engineering culture to flex a bit more than usual.
The thing that sucks is a culture turning bad does not immediately result in loss of sales or revenue. Once you get big enough, your culture can decline for years or even decades.
Yet in the first, second, or even third year of terrible culture changes, a business will likely be patting itself on the back for how fast it got stuff out the door and how much money it saved by reducing head count.
It's not individual contributors responsible for bad quality.
It is corporate, investing, and all-around culture that does not value quality and barely has a sense of it at all. (Steve Jobs was an asshole, but he had a strong sense of quality. Whether you agree with his choices is a matter of taste; it's hard to argue that the things he drove Apple to produce lacked quality.)
People instead value deadlines, ever advancing middle manager careers, and tiny differences in margin percentages.
> It's not individual contributors responsible for bad quality.
> It is corporate
When dealing with corporations, that's _always_ the case.
Even if it turns out that a single person was able to make a large mistake that went unnoticed, guess what, it's still a corporate failure.
> People instead value deadlines, ever advancing middle manager careers, and tiny differences in margin percentages.
Of these, the managers trying to advance are the most harmful. I've seen situations where actions of managers trying to advance their careers actually caused harm to the company, but they were able to place the blame elsewhere. Some are pretty good at that.
This is definitely true, and is a cultural thing, and deadlines are like, one of the most quintessentially big corpo dysfunctions ever.
> It's not individual contributors
When you have a surgical complication, do you blame the admins and the hospital bureaucracy? No.
When you lose a court case, whose fault is it? Do you blame the clerks? Do you blame the Attorney General? No.
> does not value quality and barely has a sense of it at all
I haven't met an "individual contributor" who cares about that shit in my life. Among the ones who do a lot of work all day, the rare few, the guys with the greasy hair and the track pants who live alone in Sunnyvale for a year, they have consistently been the least interested in a holistic sense of quality.
You will say, oh the managers gave them the goals. Dude the managers do fucking nothing. They say the same shit to everyone, the "IC" has all the agency. They can work, or not work as much as they want.
In fact the hardest working ICs I know are relentlessly optimizing for meeting immediate, dull goals. The ICs are almost always working on consumer products that are tremendously shitty and buggy.
Listen, who the fuck is responsible? Everyone! Why do you give the trackpants guys a pass? They're dicks!
The conundrum is: how do you build an organization that can simultaneously scale super-linearly and retain a commitment to quality? There are plenty of businesses that pursue stable profitability or linear growth and maintain quality. But the faster an organization grows, the more it becomes metrics-driven, and outside of a regulated industry where quality is a requirement, it's hard to balance the priorities.
I'm reminded of Honeycomb's pointed experiment in setting up a rotating employee seat as a voting member of their board: https://www.protocol.com/workplace/board-of-directors-honeyc... - this type of thing can contribute towards striking the right balance at all levels of an organization. But it's definitely not a solved problem.
Apple did by having a quality dictator at the top. It would have lasted a lot longer but his unusual disposition led him to accidentally kill himself with a strange diet.
In other words you can’t have a company run by the MBA mindset, there must be a superseding set of values beyond business school values.
I guess you haven't used the medical system recently - despite monumental efforts by doctors, the standard of care is actually horrific when you take the whole system into account. It has turned into a system of keeping patients alive rather than delivering optimal outcomes, and much like the legal system, if you have someone knowledgeable who can put in considerable time on your case your outcomes will be a lot better.
The real issue is that Moore's law is on its last legs and you can't squeeze blood from a stone. So manufacturers resorted to increasing TDP, bringing us closer to power and current levels that are more likely to cause issues.
As for general shittiness, that's been going on since forever. Not sure why you're complaining. It's both incorrect and off-topic.
Race to the bottom is as old as human civilization.
Final frame composition can occur on the home PC to avoid input latency concerns.
I envision some combination of LFNs and diffusion; you could produce near approximations of likely desired scenes and have the client select and refine WRT its input and previous state.
That's not the dumbest idea in the world. The USB people had the sense to use a higher voltage to keep the I^2*R losses down.
USB PD 3.1 can handle 240 watts over an appropriately-rated Type C cable, so 3 such cables could handle the load with some margin to spare. PC power supplies could do worse than to move in that direction.
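Sketching the arithmetic (the 10 milliohm cable resistance below is an assumed round number, only there to show how the losses scale):

    # Why higher voltage helps: for a fixed power draw, current (and the
    # I^2*R loss in the conductors) falls with voltage.
    P_gpu = 600.0       # watts, worst-case 12VHPWR budget
    R_cable = 0.010     # ohms, assumed round-trip conductor resistance

    for v in (12, 24, 48):
        i = P_gpu / v
        loss = i ** 2 * R_cable
        print(f"{v:>2} V: {i:5.1f} A, ~{loss:4.1f} W lost in the cable")
    # 12 V: 50.0 A, ~25.0 W | 24 V: 25.0 A, ~6.2 W | 48 V: 12.5 A, ~1.6 W

    # USB PD 3.1 EPR tops out at 48 V x 5 A = 240 W per cable, so a 600 W
    # load needs ceil(600 / 240) = 3 cables, with 120 W of headroom.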
4090's TDP is 450W, and it demonstrably runs on 850W PSUs.
If the cables are rated high enough and the PSU is large enough, there is no wattage that is "less valid" than any other. The 3090 Ti from the previous generation runs at 450W too, it just uses three 8-pin (6+2) cables rather than NVIDIA's fancy adapter.
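For comparison, the per-pin currents at the spec power limits work out roughly like this (a quick sketch; it ignores the extra 75 W the PCIe slot itself can deliver):

    # Per-pin current at the spec power limits.
    def amps_per_pin(watts, volts, pins):
        return watts / volts / pins

    # Classic 8-pin (6+2) PCIe connector: 150 W over three 12 V pins.
    print(amps_per_pin(150, 12, 3))   # ~4.2 A per pin, x3 connectors = 450 W

    # 12VHPWR: up to 600 W over six 12 V pins.
    print(amps_per_pin(600, 12, 6))   # ~8.3 A per pin, roughly double per contact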
That was supposed to be tongue-in-cheek, thus the “(estimated)” part.
Hooray for the clarity of sarcasm in text. Sigh.
I know it’s not that bad. A standard wall circuit can’t supply that much power. But I find the power levels graphics cards have reached crazy, and I think that’s what we should be solving: not how to supply them more power without things melting.
The connector is shoddy, and it's good entertainment value for me personally, someone who can't afford the card.
But that beast of a card, despite all the amps and watts it swallows, is a very affordable tool for computational tinkerers--think people working in their garage on the next molecular simulation software or AI application--not to mention graphical artists and architects. True, you can get more computational power by paying for cloud services, but a cloud workstation with 32 GB of RAM and no GPU will cost you 1700 USD/mo on AWS, and you will have to connect through an RDP interface that just hurts.
So, back to the connector, I hope Nvidia and third party manufacturers solve it. I'll hold off on burning bridges until the day they decide not to sell the cards anymore and instead rent them at the same price Amazon does. We don't need another Adobe.
A fire hazard is a major concern no matter what the card can or can’t do.
Anyway, for home tinkering the 4090 is generally a minor improvement over a 4080 or even vastly cheaper cards. More is better, but not always by very much.
That said, the real value of cloud services is you can sometimes get more done with 2TB of RAM for 4 hours than you can with 32GB of RAM for 4 months. All without the need to have that much computing power anywhere near you.
> That said, the real value of cloud services is you can sometimes get more done with 2TB of RAM for 4 hours than you can with 32GB of RAM for 4 months. All without the need to have that much computing power anywhere near you.
I know that. Alas, I can't get more programming done with 2TB of RAM in 4 hours than I can get with 32GB in three years. Human limitations.
People pretending to do professional work with a low-budget/low-quality workstation. If you cannot afford to operate a 4090 according to spec, you cannot blame NVIDIA if things go wrong.
Eh? What are you talking about? The part melting and damaging the card is the adapter that's literally provided by Nvidia. How is this "not according to spec"????