
Incredible. Traditional 10Gb network gear is very expensive to buy and to operate. $50 is a lot cheaper.


not really true, a similar setup with 40Gbe (three nodes, no switch) would run you less than $100 for pcie cards and cables


The only NICs I see in that price range are old Mellanox cards.

Intel NICs are 5-10x that price. I'm not sure why, but my suspicion is that it has to do with driver support and/or interoperability.


I run mellanox connectx3 cards, they work immediately with no extra drivers on windows 10/11 and every linux I've tried

mellanox is/was quite good at getting code upstreamed

maybe I need to do my own blog post about my pile of computers...
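
If anyone wants to sanity-check a card like this on Linux, here is roughly what I'd look at; just sysfs reads, nothing vendor-specific. A minimal sketch only: mlx4_core is the driver a ConnectX-3 normally binds to, but verify the names on your own box.

    import os

    SYS_NET = "/sys/class/net"

    for iface in sorted(os.listdir(SYS_NET)):
        dev = os.path.join(SYS_NET, iface, "device")
        if not os.path.exists(dev):
            continue  # skip lo, bridges and other virtual interfaces

        # PCI driver bound to the port; a ConnectX-3 usually shows mlx4_core
        driver = os.path.basename(os.readlink(os.path.join(dev, "driver")))

        # link speed in Mb/s; reading it can fail when the link is down
        try:
            with open(os.path.join(SYS_NET, iface, "speed")) as f:
                speed = f"{int(f.read())} Mb/s"
        except OSError:
            speed = "link down"

        print(f"{iface:12s} driver={driver:12s} {speed}")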


> maybe I need to do my own blog post about my pile of computers...

Yes, several of us would love to read about that! I haven't switched to 10 Gbit/s yet...


> I run mellanox connectx3 cards, they work immediately with no extra drivers on windows 10/11 and every linux I've tried

Own three, can confirm for Windows and Linux.


would love to read this!


Mellanox works fine.


And with Mellanox you get working RDMA/RoCE.
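
A quick way to check that RDMA is actually exposed, and whether a port is running RoCE or native InfiniBand; a minimal sketch that just reads sysfs, assuming the mlx4_ib (or equivalent) module is loaded:

    import os

    IB_SYS = "/sys/class/infiniband"  # populated once an RDMA driver registers

    devices = sorted(os.listdir(IB_SYS)) if os.path.isdir(IB_SYS) else []
    if not devices:
        print("no RDMA devices registered")

    for dev in devices:
        ports_dir = os.path.join(IB_SYS, dev, "ports")
        for port in sorted(os.listdir(ports_dir)):
            # "Ethernet" here means RoCE; "InfiniBand" means native IB
            with open(os.path.join(ports_dir, port, "link_layer")) as f:
                print(f"{dev} port {port}: {f.read().strip()}")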


Gotcha. Why the price discrepancy then?


I can't answer definitively, but I was looking for SFP cards recently and the older cards don't really support ASPM. The cards themselves aren't power hogs, but they keep the CPU from dropping into lower power states at idle.

The cheapest one I found that others reported as actually having working ASPM was the Intel X710, and those are much more expensive than the ConnectX-3.
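
For anyone chasing the same thing, here's a rough way to see what ASPM is doing, reading only sysfs. The global policy file is standard; the per-device attributes only exist on newer kernels (roughly 5.5+), and the PCI address below is a placeholder you'd swap for your NIC's address from lspci:

    import glob
    import os

    # global policy, e.g. "default [performance] powersave powersupersave"
    with open("/sys/module/pcie_aspm/parameters/policy") as f:
        print("policy:", f.read().strip())

    # per-device ASPM toggles (newer kernels only); 0000:04:00.0 is a placeholder
    attrs = sorted(glob.glob("/sys/bus/pci/devices/0000:04:00.0/link/*aspm*"))
    if not attrs:
        print("no per-device ASPM attributes (older kernel or wrong address)")
    for attr in attrs:
        with open(attr) as f:
            print(os.path.basename(attr), "=", f.read().strip())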


Are you comparing new Intel cards to old Mellanox cards on eBay? If not, I don't know why; I haven't compared them myself. Some feature, maybe? Cost doesn't always make sense either.


I'm just comparing the prices I see when I search eBay for "40gbe nic" vs "40gbe nic intel", making no effort to compare features.


I avoided older 10GbE NICs due to power consumption. Maybe that's why?


Can you provide links? I'll upgrade my home lab from 10g to 40g immediately at that price.


In 2024 I would suggest deploying 2x25G instead, via e.g. MCX4121. Pricing is similar (<$30 NICs), but:

* 2x25G throughput is higher than 40G,

* 25G latency is lower than 40G,

* you can use 25G ports as 10G ports, and

* you can use DACs to connect 4x25G <=> 100G

That last point is particularly relevant given the existence of switches like the Mikrotik CRS504, providing 4x100G ports on 25W.
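
Rough numbers behind the comparison above, nothing more than arithmetic; the lower-latency claim for 25G is about the simpler single-lane PHY rather than raw frame time, which these numbers don't capture:

    # back-of-the-envelope wire time for a standard 1500-byte frame,
    # plus the breakout math for a 4x100G switch like the CRS504
    rates_gbps = {"10G": 10, "25G": 25, "40G": 40, "2x25G": 50, "100G": 100}
    frame_bits = 1500 * 8

    for name, gbps in rates_gbps.items():
        wire_time_us = frame_bits / (gbps * 1000)  # Gb/s -> bits per microsecond
        print(f"{name:6s} ~{wire_time_us:.2f} us per 1500B frame")

    # each 100G port can split into 4x25G with a breakout DAC
    print("25G hosts on a 4x100G switch via breakout:", 4 * 4)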


those are all reasonable points; if I were doing mine again I would spend a little more and go up to 100gbe

if you run all older mellanox gear, the cx3 can do the kinda nonstandard 56gbe as well


What are you doing that you need 100gbe?

I am still on 1gbe... I guess I don't transfer anything bigger than a few GiB from time to time.


No individual connection other than maybe a central storage server needs 100GbE, at least for me, but a 100GbE backplane is good for aggregating a lot of 1GbE PoE devices, as an example. With residential fiber/coax reaching 5 Gb/s, 1GbE is not enough.


Mellanox ConnectX-3, probably.


cx353a and cx354a, prefer the fcbt models, but you can reflash them all to fcbt


What does fcbt stand for and why would I want it?


the same hardware was sold at different prices with different capabilities enabled based on the exact model variant, stuff like higher speeds and infiniband support

you can see them all in the datasheet, I believe fcbt is the one with all the stuff enabled
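
If you want to check which variant you actually have before touching anything, the flint tool from the mstflint package can query the card read-only. A sketch of just that step; I've deliberately left out the burn command, since the exact image and options depend on your card, so check the mstflint docs before reflashing. The PCI address is a placeholder.

    import subprocess

    PCI_ADDR = "04:00.0"  # placeholder: your card's address from lspci

    # read-only query; needs mstflint installed and usually root
    out = subprocess.run(
        ["flint", "-d", PCI_ADDR, "query"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        # PSID identifies the exact SKU (e.g. an FCBT vs QCBT board)
        if line.startswith(("PSID", "FW Version")):
            print(line)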



